“…With the embodiment turn have emerged methods for collecting and analyzing multimodal data to model embodied interactions (Worsley and Blikstein, 2018; Abrahamson et al., 2021). These include data for analyzing gestures (Closser et al., 2021), eye gaze (Schneider and Pea, 2013; Shvarts and Abrahamson, 2019), facial expression (Monkaresi et al., 2016; Sinha, 2021), grip intensity (Laukkonen et al., 2021), and so on, coupled with traditional statistical methods, qualitative methods, and deep learning algorithms that model human behavior from massive amounts of mouse-click and text-based data (e.g., Facebook's DeepText, Google's RankBrain).…”