“…MMLA researchers have utilized a variety of data sources (such as audio, video, eye gaze and skin conductance) for modelling collaboration (Pugh et al., 2022; Reilly & Schneider, 2019). From the collected data, features ranging from speaking time, turn-taking and joint visual attention to facial action units and emotions have been extracted for understanding and modelling collaboration (Cai et al., 2020; Chejara, Prieto, Rodríguez-Triana, Ruiz-Calleja, Kasepalu, et al., 2023; Martinez-Maldonado et al., 2013; Reilly & Schneider, 2019).…”
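As a minimal illustration of two of the audio-derived features mentioned above, speaking time and turn-taking, the sketch below computes both from toy diarized speech segments. The segment tuples and feature definitions are illustrative assumptions, not the pipelines of the cited authors.

```python
# Hypothetical sketch: deriving speaking-time and turn-taking features
# from diarized speech segments (speaker, start_s, end_s).
# The segments below are toy data, not from any cited study.
from collections import defaultdict

segments = [
    ("A", 0.0, 4.0),
    ("B", 4.5, 7.0),
    ("A", 7.2, 9.0),
    ("B", 9.5, 12.0),
]

def speaking_time(segs):
    # Total seconds each speaker holds the floor.
    totals = defaultdict(float)
    for spk, start, end in segs:
        totals[spk] += end - start
    return dict(totals)

def turn_count(segs):
    # Count one "turn" each time the active speaker changes.
    ordered = sorted(segs, key=lambda s: s[1])
    return sum(1 for prev, cur in zip(ordered, ordered[1:]) if prev[0] != cur[0])

print(speaking_time(segments))
print(turn_count(segments))
```

In practice such features would be computed over a sliding window and fed, alongside video- and gaze-based features, into a collaboration model.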