2020
DOI: 10.1109/access.2020.3005956

A Structure Design of Virtual and Real Fusion Intelligent Equipment and Multimodal Navigational Interaction Algorithm

Abstract: Virtual experiments have become an interesting research topic in the field of education. However, we found that current virtual experiments have some limitations: first, researchers use simulated visual effects to represent the virtual experiments, which decreases the immersion of the user's simulated experiments; second, most virtual experiments offer only mouse or touch-screen interaction, which reduces the realism of the user's simulated experiments; third, students …

Cited by 5 publications (6 citation statements)
References 32 publications (34 reference statements)
“…In this paper, volunteers were asked to participate in two experiments: (1) the AR experiment of intention acquisition using the MMNI algorithm [26] and (2) the smart glove chemistry experiment of intention acquisition using the NIAMIU algorithm. Each volunteer must complete the concentrated sulfuric acid dilution experiment three times in the comparison experiment, and the intention recognition rate is calculated by recording the number of successful recognitions (a successful recognition means that the system prompts the user with the current operation intention by voice and gives the response results of the operation in real time on the virtual-reality fusion platform).…”
Section: Methods
Mentioning confidence: 99%
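The recognition-rate bookkeeping described in the quoted methods text above is simple enough to sketch. Below is a minimal, hypothetical Python tally, assuming each volunteer's three dilution trials are logged as booleans; the volunteer IDs and outcomes are illustrative and not taken from the cited papers.

```python
# Hypothetical tally of intention-recognition outcomes, following the quoted
# methods text: each volunteer repeats the dilution experiment three times,
# and a trial counts as successful when the system both announces the
# recognized intention by voice and renders the response on the fusion platform.

# Illustrative data: volunteer id -> three boolean trial outcomes.
trials = {
    "v01": [True, True, False],
    "v02": [True, True, True],
    "v03": [False, True, True],
}

successes = sum(outcome for results in trials.values() for outcome in results)
total = sum(len(results) for results in trials.values())

recognition_rate = successes / total
print(f"intention recognition rate: {recognition_rate:.1%}")  # 7/9 -> 77.8%
```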
“…This paper invites 10 volunteers to evaluate the smart-glove-based virtual-reality fusion experiment, the virtual experiment [26], the NOBOOK [1] experiment, and the real experiment using NASA's TLX scale [28]. We believe that the lower the final scores of mental demand (MD), physical demand (PD), time demand (TD), effort (E), and frustration (F) among the six evaluated indicators, the better the effect.…”
Section: Methods
Mentioning confidence: 99%
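NASA-TLX scoring of the kind referenced in the quote above is straightforward to reproduce. A minimal sketch follows, assuming the standard unweighted ("raw TLX") average over the six subscales on 0-100 scales; the ratings are invented for illustration, and the performance subscale is included alongside the five dimensions named in the quote.

```python
# Raw (unweighted) NASA-TLX score: the mean of the six subscale ratings,
# each on a 0-100 scale. Lower demand/effort/frustration scores indicate a
# lighter workload, matching the "lower is better" reading in the quote.

from statistics import mean

# Illustrative ratings for one participant on one experiment condition.
ratings = {
    "mental_demand": 35,
    "physical_demand": 20,
    "temporal_demand": 30,
    "performance": 25,   # in raw TLX, performance runs from good (0) to poor (100)
    "effort": 40,
    "frustration": 15,
}

raw_tlx = mean(ratings.values())
print(f"raw TLX workload score: {raw_tlx:.1f} / 100")
```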
“…Kadavasal and Oliver [10] created a virtual-reality driving system for autistic individuals that incorporated physiological signals, brain signals, and eye-gaze information to improve autistic patients' driving abilities. Due to the lack of practical utility and popularity of virtual-reality experimental teaching, Xiao et al. [11] developed a multimodal interaction model that integrates voice and sensor information. Liu et al. [12] proposed a deep learning-based multimodal fusion model that combines three modalities (voice commands, hand gestures, and body movements) using various deep neural networks.…”
Section: Related Work
Mentioning confidence: 99%
“…Verification of MFA Algorithm. Although the TMFA [11] uses multichannel data information in the experimental process, its essence is the serial fusion of multimodal information, which means that only one channel of information is used in each intent recognition. The essence of MFA is the parallel fusion of multimodal intent probabilities.…”
Section: Kinect Tracks and Recognizes User Gestures Occlusion Of Obje...
Mentioning confidence: 99%
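The serial-versus-parallel distinction drawn in this last statement can be made concrete. Below is a minimal sketch of parallel, probability-level fusion, assuming each modality independently outputs a probability distribution over the same intent set and the fused result is a normalized weighted product; the intent labels, distributions, and weights are illustrative and are not taken from the MFA paper.

```python
# Parallel fusion of per-modality intent probabilities: every channel
# contributes to each recognition step, instead of switching between
# channels one at a time as in serial fusion.

import numpy as np

intents = ["pick_up_beaker", "pour_acid", "stir_solution"]

# Illustrative per-modality distributions over the same intent set.
modality_probs = {
    "speech":  np.array([0.20, 0.70, 0.10]),
    "gesture": np.array([0.30, 0.55, 0.15]),
    "sensor":  np.array([0.25, 0.60, 0.15]),
}
# Illustrative reliability weights for each modality.
weights = {"speech": 0.5, "gesture": 0.3, "sensor": 0.2}

# Weighted log-linear pooling: product of powered distributions, renormalized.
fused = np.ones(len(intents))
for name, probs in modality_probs.items():
    fused *= probs ** weights[name]
fused /= fused.sum()

print(dict(zip(intents, fused.round(3))))  # "pour_acid" dominates
```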