Learning analytics are often presented as visualisations developed from trace data collected as students study in online learning environments. Optimal analytics inform and motivate students' decisions about adaptations that improve their learning. We observe that designs for learning analytics often neglect theories and empirical findings in the learning sciences that explain how students learn. We present six learning analytics that reflect what is known in six areas (we call them cases) of theory and research in the learning sciences: setting goals and monitoring progress, distributed practice, retrieval practice, prior knowledge for reading, comparative evaluation of writing, and collaborative learning. Our designs demonstrate that learning analytics can be grounded in research on self-regulated learning and self-determination. We propose that designs for learning analytics in general should guide students toward more effective self-regulated learning and promote motivation through perceptions of autonomy, competence, and relatedness.
Data used in learning analytics rarely provide strong, clear signals about how learners process content. As a result, learning as a process is not clearly described for learners or for learning scientists. Gašević, Dawson, and Siemens (2015) urged that data be sought which more straightforwardly describe processes in terms of events within learning episodes. They recommended building on Winne's (1982) characterization of traces (ambient data gathered as learners study that more clearly represent which operations learners apply to which information) and his COPES model of a learning event: conditions, operations, products, evaluations, standards (Winne, 1997). We designed and describe an open-source, open-access, scalable software system called nStudy that responds to their challenge. nStudy gathers data that trace cognition, metacognition, and motivation as processes, captured operationally as learners work on information using nStudy's tools. nStudy can be configured to support learners' evolving self-regulated learning, a process akin to personally focused, self-directed learning science.
Developing knowledge-transforming skills in writing may help students increase learning by actively building knowledge, regardless of the domain. However, many undergraduate students struggle to transform knowledge when drafting essays based on multiple sources. Writing analytics can be used to scaffold knowledge transforming as writers bring evidence to bear in supporting claims. We investigated how to automatically identify sentences representing knowledge transformation in argumentative essays. A synthesis of cognitive theories of writing and Bloom's taxonomy identified 22 linguistic features to model processes of knowledge transforming in a corpus of 38 undergraduates' essays. Findings indicate undergraduates mostly paraphrase or copy information from multiple sources rather than engage deeply with sources' content. Eight linguistic features were important for discriminating evidential sentences as telling versus transforming source knowledge. We trained a machine learning algorithm that accurately classified nearly three of four evidential sentences as knowledge-telling or knowledge-transforming, offering potential for use in future research.
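The abstract above does not specify the features or the learning algorithm used, but the general approach (representing each evidential sentence as a vector of hand-crafted linguistic features and training a binary classifier over them) can be sketched as follows. The feature names, the toy data, and the choice of logistic regression are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of feature-based sentence classification:
# knowledge-telling (0) vs. knowledge-transforming (1).
# Feature set and classifier are illustrative assumptions only.
from dataclasses import dataclass
from sklearn.linear_model import LogisticRegression


@dataclass
class SentenceFeatures:
    word_count: int            # sentence length in tokens
    causal_connectives: int    # counts of e.g. "because", "therefore"
    overlap_with_source: float # lexical overlap with source text, 0..1


def to_vector(f: SentenceFeatures) -> list[float]:
    return [f.word_count, f.causal_connectives, f.overlap_with_source]


# Toy labelled sentences: transforming sentences here are longer,
# more causal, and overlap less with the source material.
train = [
    (SentenceFeatures(25, 2, 0.2), 1),
    (SentenceFeatures(30, 3, 0.1), 1),
    (SentenceFeatures(12, 0, 0.9), 0),
    (SentenceFeatures(15, 0, 0.8), 0),
]
X = [to_vector(f) for f, label in train]
y = [label for f, label in train]

clf = LogisticRegression().fit(X, y)

# Classify a new evidential sentence from its feature vector.
pred = clf.predict([to_vector(SentenceFeatures(28, 2, 0.15))])[0]
```

In practice the 22 features would be extracted automatically from essay text (e.g. with an NLP toolkit), and accuracy would be estimated with cross-validation rather than on the training data.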