We often need to learn how to move based on a single performance measure that reflects the overall success of our movements. However, movements have many properties, such as their trajectories, speeds, and the timing of their end-points, so the brain must decide which properties of a movement should be improved: it needs to solve the credit assignment problem. Currently, little is known about how humans solve credit assignment problems in the context of reinforcement learning. Here we tested how human participants solve such problems during a trajectory-learning task. Without an explicitly defined target movement, participants made hand reaches and received monetary rewards as feedback on a trial-by-trial basis. The curvature and direction of the attempted reach trajectories determined the monetary rewards in a manner that could be manipulated experimentally. Based on the history of action-reward pairs, participants quickly solved the credit assignment problem and learned the implicit payoff function. A Bayesian credit-assignment model with built-in forgetting accurately predicted their trial-by-trial learning.
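The abstract does not give the model's equations, but a Bayesian learner with built-in forgetting can be sketched as recursive Bayesian linear regression: the learner maintains a Gaussian belief over how action features (here, hypothetically, curvature and direction) map to reward, inflates its uncertainty each trial to discount old evidence, and updates on the observed reward. The feature choice, forgetting factor, and noise level below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def bayesian_update_with_forgetting(mu, Sigma, x, r, noise_var=1.0, forget=0.95):
    """One trial of Bayesian credit assignment with forgetting (a sketch).

    mu, Sigma : prior mean/covariance over payoff weights, e.g. one weight
                for curvature and one for direction (hypothetical features).
    x         : observed action features for this trial.
    r         : scalar reward received.
    forget<1  : inflates uncertainty each trial, so old evidence decays.
    """
    # Forgetting step: diffuse the prior so recent trials dominate.
    Sigma = Sigma / forget
    # Conjugate Gaussian update for the model r ~ N(x @ w, noise_var).
    S = x @ Sigma @ x + noise_var            # predictive variance (scalar)
    K = Sigma @ x / S                        # gain: how much to trust this trial
    mu = mu + K * (r - x @ mu)               # shift weights toward observed reward
    Sigma = Sigma - np.outer(K, x @ Sigma)   # shrink uncertainty along x
    return mu, Sigma
```

Credit assignment falls out of the gain `K`: features that covary more with the reward prediction error receive a larger share of the update.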
Clinical interviews are a powerful method for assessing students' knowledge and conceptual development. However, the analysis of the resulting data is time-consuming and can create a "bottleneck" in large-scale studies. This article demonstrates the utility of computational methods in supporting such an analysis. Thirty-four 7th-grade student explanations of the causes of Earth's seasons were assessed using latent semantic analysis (LSA). Analyses were performed on transcriptions of student responses during interviews administered prior to (n = 21) and after (n = 13) receiving earth science instruction. An instrument that uses LSA technology was developed to identify misconceptions and assess conceptual change in students' thinking. Its accuracy, as determined by comparing its classifications to the independent coding performed by four human raters, reached 90%. Techniques for adapting LSA technology to support the analysis of interview data, as well as some limitations, are discussed.

Researchers in education have long been faced with the challenge of developing ever better methods for assessing students' knowledge. Understanding the variety of conceptions that students bring to instruction is a prerequisite for effecting conceptual change (Carey, 1988; Posner, Strike, Hewson, & Gertzog, 1982). For this reason, tools that help support assessments of students' understanding are welcome commodities. Recently there has been much interest in developing computational instruments for evaluating student knowledge. In the long run, the development of such tools serves two main practical goals: to increase the level of rigor and objectivity in student assessment and to improve efficiency by enabling the processing and analysis of large amounts of data with little or no human supervision.
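The article does not reproduce its LSA instrument, but the core mechanics can be sketched in miniature: build a term-document matrix from reference explanations, take a truncated SVD to obtain a low-dimensional semantic space, fold a student's explanation into that space, and rank reference categories by cosine similarity. Real LSA is trained on a large background corpus; the tiny vocabulary and `k` below are toy assumptions for illustration only.

```python
import numpy as np

def lsa_similarity(docs, query, k=2):
    """Rank reference `docs` by semantic similarity to a `query` explanation
    using a toy latent semantic analysis (illustrative, not the article's
    instrument, which was trained on a full earth-science corpus)."""
    vocab = sorted({w for d in docs + [query] for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}

    def vec(d):
        v = np.zeros(len(vocab))
        for w in d.lower().split():
            v[index[w]] += 1.0
        return v

    X = np.array([vec(d) for d in docs]).T          # term x document matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = min(k, len(s))
    d_lat = (np.diag(s[:k]) @ Vt[:k]).T             # documents in latent space
    q_lat = U[:, :k].T @ vec(query)                 # fold query into that space

    def cos(a, b):
        n = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / n) if n else 0.0

    return [cos(q_lat, d) for d in d_lat]
```

A classifier in this style would keep one reference document per conception (e.g. the correct tilt explanation vs. the common closer-to-the-sun misconception) and assign each transcript to the most similar one.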
Which computational approach is most promising is still an open question, one that depends to a large extent on the kind of data and the goals of the analysis. In this article, we report the results of experiments in the computational analysis of clinical interview data. Unlike other common methods for assessing students' knowledge, clinical interviews are not driven by tightly controlled scripts or protocols, but are rather unstructured, open-ended, and often discursive in style. The interviewer is encouraged to react to students' responses with digressions, questions, or small impromptu experiments. There are many advantages to using interviews for probing student thinking. In particular, unlike written assessments, which are not flexible enough to trace the complex dynamics of students' understanding, interviews offer a unique way to test hypotheses about what a student is thinking at a given moment (Ginsburg, 1997). Computational methods have not previously been applied in the analysis of clinical interview data. The closest area in which a significant amount of computational work has been carried out is the design of intelligent tutoring systems (ITS; Graesser et al., 2004; Wade-Stein & Kintsch, 2004). The purpose of an ITS is...
When we learn how to throw darts, we adjust how we throw based on where the darts land. Much of skill learning is computationally similar in that we learn using feedback obtained after the completion of individual actions. We can formalize such tasks as a search problem: among the set of all possible actions, find the action that leads to the highest reward. In such cases our actions have two objectives: we want to make the best use of what we already know (exploitation), but we also want to learn so as to be more successful in the future (exploration). Here we tested how participants learn movement trajectories when feedback is provided as a monetary reward that depends on the chosen trajectory. We mathematically derived the optimal search policy for our experiment using decision theory. Participants' search behavior was well predicted by an ideal-searcher model that optimally combines exploration and exploitation.
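The paper derives its ideal searcher analytically, and that derivation is not reproduced in the abstract. As a stand-in, the exploration–exploitation trade-off it describes can be illustrated with Thompson sampling over a discretized trajectory parameter: keep a Gaussian posterior on each candidate action's mean payoff, draw one sample per posterior each trial, and act greedily on the draws, so uncertain actions are tried exactly when they might plausibly be best. The action grid, prior, and noise level are illustrative assumptions.

```python
import numpy as np

def thompson_search(reward_fn, actions, n_trials=200, noise_var=1.0, seed=0):
    """Exploration-exploitation search over a discretized action space
    (an illustrative stand-in for the paper's decision-theoretic ideal
    searcher, not its actual derivation).

    Maintains an independent Gaussian posterior on each action's mean
    payoff and selects actions by Thompson sampling.
    """
    rng = np.random.default_rng(seed)
    mu = np.zeros(len(actions))          # posterior means
    var = np.full(len(actions), 10.0)    # posterior variances (wide prior)
    for _ in range(n_trials):
        draws = rng.normal(mu, np.sqrt(var))       # one sample per action
        i = int(np.argmax(draws))                  # greedy on the samples
        r = reward_fn(actions[i]) + rng.normal(scale=np.sqrt(noise_var))
        # Conjugate Gaussian update for the chosen action's mean payoff.
        prec = 1.0 / var[i] + 1.0 / noise_var
        mu[i] = (mu[i] / var[i] + r / noise_var) / prec
        var[i] = 1.0 / prec
    return actions[int(np.argmax(mu))]
```

Early on the wide priors make the draws erratic, so the searcher explores broadly; as posteriors sharpen, the draws concentrate on high-payoff actions and behavior shifts toward exploitation, with no explicit schedule.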
Trust and reciprocity facilitate cooperation and are relevant to virtually all human interactions. They are typically studied using trust games: one subject gives (entrusts) money to another subject, who may return some of the proceeds (reciprocate). Currently, however, it is unclear whether trust and reciprocity in monetary transactions carry over to other settings, such as physical effort. Trust and reciprocity of physical effort are important because many everyday decisions involve an exchange of physical effort, and such exchange is central to labor relations. Here we studied a trust game based on physical effort and compared the results with those of a computationally equivalent monetary trust game. We found no significant difference between the effort and money conditions in either the amount trusted or the amount reciprocated. Moreover, subjects' behavior was highly positively correlated across the two conditions. This suggests that trust and reciprocity may be character traits: subjects who are trustful or trustworthy in monetary settings behave similarly during exchanges of physical effort. Our results validate the use of trust games to study exchanges of physical effort and to characterize inter-subject differences in trust and reciprocity, and they also suggest a new behavioral paradigm for studying these differences.
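For readers unfamiliar with the paradigm, the payoff structure of a standard trust game is simple to state in code. The multiplier of 3 below is the conventional choice in the literature; the paper's exact endowments and multiplier are not given in the abstract, so these numbers are illustrative assumptions (in the effort condition, "currency" would be units of physical effort rather than money).

```python
def trust_game_payoffs(endowment, sent, returned_frac, multiplier=3):
    """Payoffs in a one-shot trust game (illustrative parameters).

    The investor keeps (endowment - sent); the amount sent is multiplied
    by `multiplier` and given to the trustee, who returns a fraction
    `returned_frac` of the proceeds to the investor.
    """
    assert 0 <= sent <= endowment and 0.0 <= returned_frac <= 1.0
    proceeds = sent * multiplier
    returned = proceeds * returned_frac
    investor_payoff = endowment - sent + returned
    trustee_payoff = proceeds - returned
    return investor_payoff, trustee_payoff
```

`sent` operationalizes trust and `returned_frac` operationalizes reciprocity; comparing these two quantities across the money and effort conditions is what the study's analysis turns on.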