“…We maintain that gestures are a rich and worthwhile source of information, engender insight into a learner's current state of knowledge, and are worth the extra effort. There has also been some research on capturing gestures during virtual chemistry labs (see Aldosari and Marocco, 2015). Gestures: In this study, the recorded recalls were transcribed after the intervention and we tallied both verbal idea units and the number of iconic/representational gestures (McNeill, 1992; McNeill, 2008; McNeil et al., 2009). Hostetter and Alibali (2019) state that, while speaking, the likelihood of a gesture at a particular moment …”
Section: New Assessments: Gestures and Idea Units
Researchers, educators, and multimedia designers need to better understand how mixing physical tangible objects with virtual experiences affects learning and science identity. In this novel study, a 3D-printed tangible that is an accurate facsimile of the expensive glassware chemists use in real laboratories is tethered to a laptop running a digitized lesson. As interactive educational content is increasingly placed online, it is important to understand the educational boundary conditions associated with passive haptics and 3D-printed manipulables. Cost-effective printed objects would be particularly welcome in rural and low socioeconomic status (SES) classrooms. A Mixed Reality (MR) experience was created that used a physical 3D-printed haptic burette to control a computer-based chemistry titration experiment. This randomized controlled trial with 136 college students had two conditions: 1) low-embodied control (using keyboard arrows), and 2) high-embodied experimental (physically turning a valve/stopcock on the 3D-printed burette). Although both groups displayed similar significant gains on the declarative knowledge test, deeper analyses revealed nuanced Aptitude by Treatment Interactions (ATIs). These interactions favored the high-embodied experimental group that used the MR device, both for titration-specific posttest knowledge questions and for science efficacy and science identity. Students with higher prior science knowledge displayed higher titration knowledge scores after using the experimental 3D-printed haptic device. A multi-modal linguistic and gesture analysis revealed that during recall the experimental participants used the stopcock-turning gesture significantly more often, and their recalls produced a significantly different Epistemic Network Analysis (ENA). ENA is a type of 2D projection of the recall data; stronger connections were seen in the high-embodied group, centering mainly on the key hand-turning gesture.
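The ENA projection mentioned above can be illustrated with a small sketch. This is not the authors' implementation; the recall codes, the coded utterances, and the normalization choice are all assumptions made solely to show the general shape of the technique: count co-occurrences of codes within utterances per participant, then project each participant's co-occurrence vector into 2D.

```python
# Illustrative ENA-style sketch (not the study's actual analysis code).
# Each recall is coded as a list of utterances, each utterance a set of
# codes; code names below are hypothetical examples for a titration recall.
from itertools import combinations
import numpy as np

CODES = ["stopcock_turn", "titrant", "endpoint", "color_change"]
PAIRS = list(combinations(range(len(CODES)), 2))

def cooccurrence_vector(recall):
    """recall: list of sets of codes, one set per utterance."""
    vec = np.zeros(len(PAIRS))
    for utterance in recall:
        for k, (i, j) in enumerate(PAIRS):
            if CODES[i] in utterance and CODES[j] in utterance:
                vec[k] += 1
    # normalize so participants with longer recalls stay comparable
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# one row per participant (fabricated coded recalls)
X = np.array([
    cooccurrence_vector([{"stopcock_turn", "titrant"}, {"endpoint", "color_change"}]),
    cooccurrence_vector([{"titrant", "endpoint"}, {"color_change"}]),
])

# center, then project the network vectors into a 2D space via SVD
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
points_2d = Xc @ Vt[:2].T  # each participant's epistemic network as a 2D point
print(points_2d.shape)
```

Participants whose recalls connect the same codes land near each other in the 2D space, which is how group-level differences (such as stronger hand-turning connections) become visible.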
Instructors and designers should consider the multi-modal and multi-dimensional nature of the user interface, and how the addition of another sensory-based learning signal (haptics) might differentially affect lower prior knowledge students. One hypothesis is that haptically manipulating novel devices during learning may create more cognitive load. For low prior knowledge students, it may be advantageous to begin learning content on a more familiar interface (e.g., keyboard) before moving to more novel, multi-modal MR devices/interfaces.
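The aptitude-by-treatment interaction reported in this abstract is typically tested with a regression that includes a prior-knowledge-by-condition interaction term. The sketch below uses fabricated data (not the study's data) purely to show the model's shape; the coefficient values are assumptions.

```python
# Illustrative ATI regression sketch with fabricated data.
import numpy as np

rng = np.random.default_rng(0)
n = 136                                  # sample size matching the abstract
prior = rng.normal(0, 1, n)              # centered prior-knowledge score
cond = rng.integers(0, 2, n)             # 0 = keyboard, 1 = haptic burette
# fabricate an outcome with a positive interaction: the haptic condition
# helps more as prior knowledge rises
post = 0.5 * prior + 0.3 * cond + 0.4 * prior * cond + rng.normal(0, 0.5, n)

# design matrix: intercept, prior, condition, prior x condition
X = np.column_stack([np.ones(n), prior, cond, prior * cond])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
print(beta[3])  # interaction coefficient: a reliably nonzero value is the ATI
```

A positive interaction coefficient here corresponds to the pattern the abstract describes: higher-prior-knowledge students benefiting more from the haptic device.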
“…So, it is necessary to realize intelligent virtual experimental classrooms. Moreover, Aldosari and Marocco [21] used tactile and visual methods to simulate visualization experiments but lacked an understanding of user intention. Barmaki et al. [22] used Microsoft's Kinect for full-body gesture recognition.…”
Virtual experiments have become an interesting research topic in the field of education. However, current virtual experiments have several limitations. First, researchers use simulated virtual effects to represent the experiments, which decreases the immersion of the user's simulated experiments. Second, most virtual experiments offer only mouse or touch-screen interaction, which reduces the realism of the simulation. Third, students independently explore the experimental procedure and spend too much time on the simulation, leading to problems such as operational overload and low interaction efficiency. To solve these problems, we propose and implement a multimodal navigational interaction virtual and real fusion chemistry laboratory (MNIVRFCL). We design a new intelligent sensing device and propose a multimodal navigational interaction (MMNI) algorithm based on the auditory and tactile channels, both of which are verified and applied in MNIVRFCL. The MMNI algorithm detects users' specific behaviors to understand their intentions, and the system then guides users via voice navigation broadcasts, confirming correct behaviors and correcting incorrect ones. As a result, students can use virtual and real fusion interaction through the tactile and auditory channels and independently complete simulated experiments guided by experimental navigation. Experimental results show that the system understands user intention successfully 91.48% of the time, and that MNIVRFCL reduces operational load by 23.81% compared with the purely virtual experiment, reducing time consumption and improving students' interaction efficiency.
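The intent-detection-plus-voice-navigation loop this abstract describes can be sketched as a simple rule-based step. This is a hypothetical illustration, not the MMNI algorithm itself: the script steps, device names, and prompts are all invented for the example.

```python
# Hypothetical sketch of a multimodal navigation step: match a sensed
# tactile event against the expected action for the current experiment
# step, then emit a voice-navigation prompt confirming or correcting it.
from dataclasses import dataclass

@dataclass
class TactileEvent:
    device: str   # e.g. "stopcock", "flask" (illustrative names)
    action: str   # e.g. "turn_open", "shake"

# expected (device, action) for each step of a hypothetical titration script
SCRIPT = {
    "add_titrant": ("stopcock", "turn_open"),
    "mix_solution": ("flask", "shake"),
}

def classify_intent(step: str, event: TactileEvent) -> tuple[str, str]:
    """Return (intent, voice_prompt) for the current step and sensed event."""
    expected = SCRIPT.get(step)
    if expected is None:
        return "unknown_step", f"Step '{step}' is not in the experiment script."
    if expected == (event.device, event.action):
        return "correct", f"Good: {event.action} on the {event.device}. Continue."
    dev, act = expected
    return "incorrect", f"Careful: at step '{step}' you should {act} the {dev}."

intent, prompt = classify_intent("add_titrant", TactileEvent("stopcock", "turn_open"))
print(intent)  # correct
```

A real system would fuse noisy sensor streams (tactile plus auditory) and classify intent probabilistically rather than by exact match, but the guide-and-correct control flow is the same.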
“…In 2014, Ali [17] established a multimodal virtual chemistry laboratory that combined vision and hearing. In 2015, Aldosari and Marocco [18] combined touch and gesture in virtual chemistry experiments. In 2017, Isabwe et al. [19] presented a virtual reality-based solution for learning.…”
Virtual experiments are an important field of human-computer interaction. As more and more virtual laboratories emerge, problems with virtual experiments are rising. First, human-computer interaction is inefficient during virtual experiments: the computer cannot understand the user's intention, leading to incorrect operation. Second, there is little detection of erroneous behavior during experiments. Third, the virtual laboratory's sense of operation and realism is weak. To solve these problems, this paper designs and implements the multimodal sensing navigation virtual and real fusion laboratory (MSNVRFL). We design a new set of experimental equipment with cognitive functions and study a multimodal fusion model and algorithm for chemical experiments, both of which are verified and applied in MSNVRFL. Using the multimodal fusion perception algorithm, the user's true intentions can be understood and human-computer interaction efficiency can be improved. By carrying out virtual experiments in the virtual-and-real fusion mode, resource waste and experimental hazards can be avoided, and the user's sense of operation and realism can be improved. In addition, teaching navigation and reminders about incorrect operations are provided to users. Experimental results show that our method improves the efficiency of human-computer interaction, reduces the user's cognitive load, strengthens the user's sense of realism and operation, and stimulates students' interest in learning. Index Terms: Multimodal fusion, virtual experiments, intelligent teaching, human-computer interaction.