Students' on-task engagement during adaptive learning activities has a significant effect on their performance; at the same time, how these activities influence students' behavior is reflected in their effort exertion. Capturing and explaining effortful (or effortless) behavior, and aligning it with learning performance within contemporary adaptive learning environments, holds the promise of providing timely, proactive and actionable feedback to students. Sophisticated machine learning (ML) algorithms and rich learner data facilitate inference-making about several behavioral aspects (including effortful behavior) and about predicting learning performance, in any learning context. Researchers have been using ML methods in a "black-box" approach, i.e., as a tool where the input is the learner data and the output is a given class from the chosen construct. This work proposes a methodological shift from the "black-box" approach to a "grey-box" approach that bridges the hypothesis/literature-driven (feature extraction) "white-box" approach with the computation/data-driven (feature fusion) "black-box" approach. This allows us to utilize data features that are educationally and contextually meaningful. This paper aims to extend current methodological paradigms and puts the proposed approach into practice in an adaptive self-assessment case study, taking advantage of new, cutting-edge, interdisciplinary work on building pipelines for educational data, using innovative tools and techniques.

What is already known about this topic
- Capturing and measuring learners' engagement and behavior using physiological data has been explored in recent years and exhibits great potential.
- Effortless behavioral patterns commonly exhibited by learners, such as "cheating," "guessing" or "gaming the system," distort the learning outcome.
- Multimodal data can accurately predict learning engagement, performance and processes.

What this paper adds
- Generalizes a methodology for building machine learning pipelines for multimodal educational data, using a modularized approach, namely the "grey-box" approach.
- Showcases that the fusion of eye-tracking, facial expression and arousal data provides the best prediction of effort and performance in adaptive learning settings.
- Highlights the importance of fusing data from different channels to obtain the most suitable combinations from the different multimodal data streams, to predict and explain effort and performance in terms of pervasiveness, mobility and ubiquity.

Implications for practice and/or policy
- Learning analytics researchers will be able to use an innovative methodological approach, namely the "grey-box," to build machine learning pipelines from multimodal data, taking advantage of artificial intelligence capabilities in any educational context.
- Learning design professionals will have the opportunity to fuse specific features of the multimodal data to drive the interpretation of learning outcomes in terms of physiological learner states.
- The constraints from th...
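As a rough illustration of the "grey-box" idea, the sketch below hand-crafts a few literature-style features per modality (the white-box step) and then fuses them into a single matrix fitted with a simple linear probe (the black-box step). All feature names, the synthetic data and the linear probe are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

# Illustrative "grey-box" sketch (assumed, not the paper's code):
# white-box step = hypothesis-driven features per modality,
# black-box step = data-driven fusion + learned classifier.
rng = np.random.default_rng(0)
n = 200

# White-box: literature-informed features per modality (synthetic stand-ins).
eye = rng.normal(size=(n, 2))      # e.g., mean fixation duration, saccade rate
face = rng.normal(size=(n, 2))     # e.g., expression valence and intensity
arousal = rng.normal(size=(n, 1))  # e.g., mean electrodermal activity

# Black-box: fuse modality features into one matrix; the model learns the
# mapping to an effortful (True) / effortless (False) label.
X = np.hstack([eye, face, arousal])
w_true = np.array([1.0, 0.0, 0.4, 0.0, 0.6])          # hidden synthetic signal
y = (X @ w_true + rng.normal(scale=0.3, size=n)) > 0  # synthetic labels

Xb = np.hstack([X, np.ones((n, 1))])                  # add a bias column
w, *_ = np.linalg.lstsq(Xb, np.where(y, 1.0, -1.0), rcond=None)
accuracy = ((Xb @ w > 0) == y).mean()                 # training accuracy
```

The point of the sketch is the division of labor: the feature columns stay educationally interpretable (white-box), while the fusion and weighting are left to the data-driven stage (black-box).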
Predicting students' performance is a challenging and complicated task for institutions, instructors and learners. Accurate predictions of performance could lead to improved learning outcomes and increased goal achievement. In this paper we explore the predictive capabilities of the time students spend answering each question of a multiple-choice assessment quiz (correctly or incorrectly), along with the students' final quiz score, in the context of computer-based testing. We also explore the correlation between the time-spent factor (as defined here) and goal-expectancy. We present a case study and investigate the value of using this parameter as a learning analytics factor for improving the prediction of performance during computer-based testing. Our initial results are encouraging and indicate that the temporal dimension of learning analytics should be explored further.
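To make the time-spent idea concrete, here is a toy sketch on synthetic data. The signed-time construction is our illustrative assumption (the paper defines its time-spent factor in its own terms): time on correctly answered questions counts as productive effort, time on incorrect answers counts against it, and a naive threshold on the mean predicts pass/fail.

```python
import numpy as np

# Toy sketch of time-spent as a predictive factor (synthetic data; the
# signed-time construction below is an illustrative assumption).
rng = np.random.default_rng(1)
n_students, n_questions = 150, 10

time_spent = rng.uniform(5, 60, size=(n_students, n_questions))  # seconds
correct = rng.integers(0, 2, size=(n_students, n_questions))     # 1 = correct

# Sign each question's time by its correctness.
signed_time = np.where(correct == 1, time_spent, -time_spent)

actual_pass = correct.sum(axis=1) >= n_questions / 2
predicted_pass = signed_time.mean(axis=1) > 0   # naive threshold predictor
accuracy = (predicted_pass == actual_pass).mean()
```

Even this crude signed-time feature tracks the final score reasonably well on the synthetic data, which is the intuition behind treating response time as a learning analytics factor.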
Practising self-regulated learning (SRL) has been proposed as a way to develop learning autonomy. However, there is a lack of empirical evidence on how SRL strategies affect autonomous learning capacity. This study attempts to bridge that gap by utilizing learners' trace data to measure their autonomous interactions, and investigates the effects of four SRL strategies on learners' autonomous choices. The goal is to explain how the employed SRL strategies impact autonomous control (in terms of the frequency of self-enforced decisions, as well as the time spent on decision making). The results from an exploratory study with undergraduate learners (N = 113) showed that goal-setting and time-management have strong positive effects on autonomous control, effort-regulation has a moderate positive effect on learners' autonomy, while help-seeking has a strong negative effect. These findings provide empirical evidence and help clarify the role of each of the SRL strategies in the development of autonomous learning capacity, from a learning analytics perspective. Limitations and potential implications for research and practice are also discussed.
Responsible AI is concerned with the design, implementation and use of ethical, transparent and accountable AI technology, in order to reduce biases, promote fairness and equality, and help facilitate the interpretability and explainability of outcomes, all of which are particularly pertinent in a healthcare context. However, the extant literature on health AI reveals significant issues in each of the areas of responsible AI, posing moral and ethical consequences. This is particularly concerning in a health context, where lives are at stake and where there are significant sensitivities not as pertinent in domains outside of health. This calls for a comprehensive analysis of health AI using responsible AI concepts as a structural lens. A systematic literature review supported our data collection and sampling procedure; the corresponding analysis and extraction of research themes helped us provide an evidence-based foundation. We contribute a systematic description and explanation of the intellectual structure of responsible AI in digital health and develop an agenda for future research.