Learning analytics of embodied design: Enhancing synergy (2022)
DOI: 10.1016/j.ijcci.2021.100409

Cited by 11 publications (12 citation statements)
References 26 publications (18 reference statements)
“…With the embodiment turn have emerged methods for collecting and analyzing multimodal data to model embodied interactions (Worsley and Blikstein, 2018; Abrahamson et al., 2021). These include data for analyzing gestures (Closser et al., 2021), eye gaze (Schneider and Pea, 2013; Shvarts and Abrahamson, 2019), facial expression (Monkaresi et al., 2016; Sinha, 2021), grip intensity (Laukkonen et al., 2021), and so on, coupled with traditional statistical methods, qualitative methods, and deep learning algorithms that model human behavior based on massive amounts of mouse-click and text-based data (e.g., Facebook's DeepText, Google's RankBrain).…”
Section: Growth of Multimodal Learning Analytics
Citation type: mentioning
Confidence: 99%
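The statement above surveys modality-specific streams (gestures, gaze, facial expression, grip intensity) that are fused and then modeled statistically or with machine learning. As a minimal sketch of what such multimodal fusion can look like in practice, the Python snippet below aligns three synthetic streams into per-window features; the stream names, sampling rate, and window size are illustrative assumptions, not details taken from the cited works.

```python
# Minimal sketch of multimodal feature fusion for embodied-interaction data.
# All stream names, rates, and the synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-sample streams, aligned at 30 Hz over 10 seconds:
# gesture velocity (px/s), gaze fixation flag (0/1), grip intensity (0..1).
n = 300
gesture_velocity = rng.gamma(shape=2.0, scale=40.0, size=n)
gaze_fixation = rng.integers(0, 2, size=n)
grip_intensity = rng.beta(2.0, 5.0, size=n)

def window_features(stream, win=30):
    """Aggregate a 1-D stream into per-window means (1 s windows at 30 Hz)."""
    trimmed = stream[: len(stream) // win * win]
    return trimmed.reshape(-1, win).mean(axis=1)

# Fuse the three modalities into one feature matrix: one row per 1 s window.
# Rows like these would feed the statistical or deep-learning models the
# quoted survey mentions.
features = np.column_stack([
    window_features(gesture_velocity),
    window_features(gaze_fixation),
    window_features(grip_intensity),
])
print(features.shape)  # (10, 3): 10 windows x 3 modalities
```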
“…One way that LA and MSEs have begun to intersect is in the form of MMLA [32]. Inspired by micro-ethnographic and interaction-analysis methodologies, MMLA aims to harness the power of sensor data and computational analysis to better understand and support student learning [8]. MMLA research focuses primarily on micro-level data and acts as a virtual observer and analyst of micro-level learning activities [21].…”
Section: B. Multisensory Environments and Multimodal Learning Analytics
Citation type: mentioning
Confidence: 99%
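To make the "virtual observer" idea concrete, here is a minimal sketch of consuming timestamped micro-level events and aggregating them into per-student activity summaries. The event kinds, schema, and bin size are hypothetical, chosen only to illustrate the pattern the quoted statement describes.

```python
# Minimal sketch of MMLA as a "virtual observer": ingest timestamped
# micro-level events and summarize activity per student per time bin.
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class MicroEvent:
    t: float       # seconds since session start
    student: str
    kind: str      # hypothetical kinds: "gaze_shift", "touch", "utterance"

def observe(events, bin_seconds=60):
    """Count each student's micro-level events per time bin."""
    summary = defaultdict(Counter)
    for e in events:
        bin_idx = int(e.t // bin_seconds)
        summary[(e.student, bin_idx)][e.kind] += 1
    return summary

events = [
    MicroEvent(3.2, "s1", "gaze_shift"),
    MicroEvent(4.8, "s1", "touch"),
    MicroEvent(61.0, "s2", "utterance"),
]
for key, counts in observe(events).items():
    print(key, dict(counts))  # e.g. ('s1', 0) {'gaze_shift': 1, 'touch': 1}
```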
“…This is due to the potential of MSEs to activate and scaffold different communication and interaction modalities, as well as the potential of the produced MMLA to provide granular and timely insights [33]. Moreover, such multimodal and rich data can provide novel affordances that enhance learning (e.g., affective learning [110] and embodied learning [8]).…”
Section: Implications for Design and Practice
Citation type: mentioning
Confidence: 99%