The research examined whether corrected misinformation influences anaphoric inferences people make during subsequent reading. Participants read a set of corrected-misinformation and no-misinformation stories and made judgments about probe words that were either appropriate or inappropriate anaphoric referents. At a short delay, the results showed less activation for appropriate referents that were corrections of misinformation. At longer delays, the activation of appropriate referents showed no significant difference, but misinformation probes were more quickly recognized than were inappropriate referents that were incidentally mentioned in control story versions. In all conditions, appropriate referents showed more activation than inappropriate ones. The results suggest that corrected misinformation can continue to influence on-line reading processes.
Access to prior cases in memory is a central issue in analogical reasoning. Previous research accounts for access in terms of overall similarity between complete new exemplars and complete stored instances, and stresses the relative importance of surface-level similarities in access to complete cases (Gentner & Landers, 1985; Rattermann & Gentner, 1987). However, for cross-domain remindings, abstract similarities capture the important commonalities between cases (Schank, 1982; Seifert, McKoon, Abelson, & Ratcliff, 1986). Therefore, models of analogy must account for structural-level remindings, when they do occur, in terms of abstract similarities. In planning and problem-solving tasks, a stored exemplar may be more useful if accessed before the new pattern is complete, when past experience can bring possible solutions to bear or warn of potential dangers while the outcome is still undetermined. Further, different partial sets of abstract features may result in differing access to analogous cases. Features that predict when prior cases might be useful to problem solving could serve as better retrieval cues than other abstract cues that are equally similar, yet less distinctive to the specific problem situation. To test these hypotheses, several experiments were conducted using thematic stories in a modification of the reminding paradigm developed by Gentner and Landers (1985). By examining the relative effectiveness of subsets of features in accessing relevant cases, it was found that a subset of abstract cue features predicting when a planning failure might occur led to more reliable access to complete prior analogies than did a subset of abstract features expressing specific information about planning decisions and outcomes. Further experiments show that access based on abstract cues is determined by how distinctly the feature sets characterize the conditions leading up to the planning decision point, not by differences in overall similarity to the case.
Abstract. Interest in psychological experimentation from the Artificial Intelligence community often takes the form of rigorous post-hoc evaluation of completed computer models. Through an example of our own collaborative research, we advocate a different view of how psychology and AI may be mutually relevant, and propose an integrated approach to the study of learning in humans and machines. We begin with the problem of learning appropriate indices for storing and retrieving information from memory. From a planning task perspective, the most useful indices may be those that predict potential problems and access relevant plans in memory, improving the planner's ability to predict and avoid planning failures. This "predictive features" hypothesis is then supported as a psychological claim, with results showing that such features offer an advantage in terms of the selectivity of reminding because they more distinctively characterize planning situations where differing plans are appropriate. We present a specific case-based model of plan execution, RUNNER, along with its indices for recognizing when to select particular plans (appropriateness conditions) and how these predictive indices serve to enhance learning. We then discuss how the predictive features claim, as implemented in the RUNNER model, is tested in a second set of psychological studies. The results show that learning appropriateness conditions leads to greater success in recognizing when a past plan is in fact relevant in current processing, and produces more reliable recall of the related information. This form of collaboration has resulted in a unique integration of computational and empirical efforts to create a model of case-based learning.