We examined developmental and individual differences in 6th and 8th graders' fraction arithmetic and overall mathematics achievement and related them to differences in understanding of fraction magnitudes, whole number division, executive functioning, and metacognitive judgments within a cross-sectional design. Results indicated that the difference between low-achieving and higher-achieving children's fraction arithmetic knowledge, already substantial in 6th grade, was much greater in 8th grade. The fraction arithmetic knowledge of low-achieving children was similar in the two grades, whereas higher-achieving children showed much greater knowledge in 8th than 6th grade, despite both groups having been in the same classrooms, using the same textbooks, and having the same teachers and classmates. Individual differences in both fraction arithmetic and mathematics achievement test scores were predicted by differences in fraction magnitude knowledge and whole number division, even after the contributions of reading achievement and executive functioning were statistically controlled. Instructional implications of the findings are discussed.
Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it with the problems from a widely used textbook series. The simulation generated many phenomena of children's fraction arithmetic performance through a small number of common learning mechanisms operating on a biased input set. The biases were not unique to this textbook series (they were present in 2 other textbook series as well), nor were the phenomena unique to a particular sample of children (they were present in another sample as well). Among other phenomena, the model predicted the high difficulty of fraction division, variable strategy use by individual children and on individual problems, relative frequencies of different types of strategy errors on different types of problems, and variable effects of denominator equality on the four arithmetic operations. The model also generated nonintuitive predictions regarding the relative difficulties of several types of problems and the potential effectiveness of a novel instructional approach. Perhaps the most general lesson of the findings is that the statistical distribution of problems that learners encounter can influence mathematics learning in powerful and nonintuitive ways.
We describe the DSHM (Dynamically Structured Holographic Memory) model of human memory, which uses high-dimensional vectors to represent items in memory. The complexity and intelligence of human behavior can be attributed, in part, to our ability to utilize vast knowledge acquired over a lifetime of experience with our environment. Thus models of memory, particularly models that can scale up to lifetime learning, are critical to modeling human intelligence. DSHM is based on the BEAGLE model of language acquisition (Jones & Mewhort, 2007) and extends this type of model to general memory phenomena. We demonstrate that DSHM can model a wide variety of human memory effects. Specifically, we model the fan effect, the problem-size effect (from math cognition), dynamic game playing (detecting sequential dependencies from memories of past moves), and time-delay learning (using an instance-based approach). This work suggests that DSHM is suitable as a basis for learning both over the short term and over the lifetime of the agent, and as a basis for both procedural and declarative memory. We argue that cognition needs to be understood at both the symbolic and sub-symbolic levels, and demonstrate that DSHM intrinsically operates at both of these levels of description. In order to situate DSHM in a familiar context, we discuss the relationship between DSHM and ACT-R. Cognitive science, as a discipline, provides explanations for why cognitive phenomena occur and how they occur. The explanation for how a phenomenon occurs often involves a description of what processes underlie it. This need for process-level accounts makes modeling, which is explicit in mechanical details, a particularly useful tool in generating explanations for cognitive phenomena.
To achieve a full, theoretical understanding of a cognitive process, explanations need to be provided at both symbolic (i.e., representational) and sub-symbolic levels of description. The classic symbolic approaches to modeling do not account for how the symbol manipulations described in the model could arise from neural tissue, nor do they account for how the symbols themselves come into existence. Classic connectionist approaches are more concerned with neural plausibility, but are notoriously opaque, doing little to aid our understanding of the cognitive processes modeled. By contrast, the vector-symbolic approach to modeling explicitly provides an account at both levels of description. Vector Symbolic Architectures (VSAs), a term coined by Gayler (2003), are a set of techniques for instantiating and manipulating symbolic structures in distributed representations. Research into VSAs has been motivated by limitations in the ability of traditional connectionist models (i.e., non-recurrent models with one or two layers of connections) to represent knowledge with complicated structure (Plate, 1995). Like human memory, vector symbolic architectures can store complicated and recursive relations be...
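The core VSA operation the passage alludes to, binding and unbinding symbols stored as high-dimensional vectors, can be sketched with circular convolution, the binding operator of Plate's (1995) Holographic Reduced Representations on which this family of models builds. This is a minimal illustration only; the dimensionality and variable names are assumptions, not DSHM's actual parameters.

```python
import numpy as np

def cconv(a, b):
    # Circular convolution: the HRR binding operator, computed via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def cinv(a):
    # Approximate inverse (involution): reverse all elements but the first
    return np.concatenate(([a[0]], a[1:][::-1]))

def cos(a, b):
    # Cosine similarity, used to compare a retrieved vector with candidates
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
n = 2048
# Random high-dimensional vectors stand in for atomic symbols
role, filler, unrelated = (rng.normal(0, 1 / np.sqrt(n), n) for _ in range(3))

trace = cconv(role, filler)        # bind a role to a filler in one vector
probe = cconv(trace, cinv(role))   # unbind by convolving with the role's inverse

print(cos(probe, filler))     # high: a noisy copy of the filler is recovered
print(cos(probe, unrelated))  # near zero: unrelated symbols stay dissimilar
```

Because the retrieved vector is only a noisy approximation of the original filler, such models clean up retrievals by comparing against a lexicon of known item vectors, which is one way distributed (sub-symbolic) storage supports symbolic access.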
Why is subsequent recall sometimes better for self-generated answers than for answers obtained from an external source (e.g., calculator)? In this study, we explore the relative contribution of 2 processes, recall attempts and self-computation, to this generation effect (i.e., enhanced answer recall relative to when problems are practiced with a calculator). Adults (N = 36) practiced unfamiliar alphabet arithmetic problems (A + 4 = ?; answer E) in 3 learning conditions: self-generating answers through recall or counting (self-generate learning), obtaining answers with a customized calculator (calculator-only learning), or using a calculator after first attempting answer recall (retrieve-else-calculator learning). Subsequently, participants were tested without a calculator. Retrieve-else-calculator learning was expected to produce an intermediate level of performance because it captures the benefits of recall attempts but excludes any benefits of the other self-generation process (mental computation). However, retrieve-else-calculator learning proved as effective as self-generation learning (latency, accuracy, and recall rates on test), and both led to a test performance that was superior to calculator-only learning. We suggest that mental generation processes that involve the retrieval of intermediate products may introduce retrieval interference that precludes a strong contribution to the generation effect. However, tedious mental computation may contribute to superior access indirectly, by inciting automatic recall attempts. This effect can be mimicked by imposing a delay prior to calculator use. Our results are compatible with the view that learning contexts that promote recall activities produce a customized type of knowledge to support cued recall (Rickard & Bajic, 2006).
To advance cognitive theory, researchers must be able to parse the performance of a task into its significant mental stages. In this article, we describe a new method that uses functional MRI brain activation to identify when participants are engaged in different cognitive stages on individual trials. The method combines multivoxel pattern analysis to identify cognitive stages and hidden semi-Markov models to identify their durations. This method, applied to a problem-solving task, identified four distinct stages: encoding, planning, solving, and responding. We examined whether these stages corresponded to their ascribed functions by testing whether they are affected by appropriate factors. Planning-stage duration increased as the method for solving the problem became less obvious, whereas solving-stage duration increased as the number of calculations to produce the answer increased. Responding-stage duration increased with the difficulty of the motor actions required to produce the answer.
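The pattern-classification half of the method, deciding which cognitive stage a multivoxel activation snapshot belongs to, can be illustrated with synthetic data and a nearest-centroid classifier. This toy sketch omits the hidden semi-Markov duration model the authors combine it with, and all data here are fabricated stand-ins, not fMRI patterns from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 50
stages = ["encode", "plan", "solve", "respond"]

# Each stage gets a distinct synthetic mean activation pattern
prototypes = {s: rng.normal(0, 1, n_voxels) for s in stages}

def sample(stage, noise=0.5):
    # A noisy multivoxel "scan" generated from a stage's prototype
    return prototypes[stage] + rng.normal(0, noise, n_voxels)

# "Train": average several noisy scans per stage into a centroid
centroids = {s: np.mean([sample(s) for _ in range(20)], axis=0) for s in stages}

def classify(pattern):
    # Nearest-centroid MVPA: assign the stage whose centroid is closest
    return min(centroids, key=lambda s: np.linalg.norm(pattern - centroids[s]))

print(classify(sample("plan")))
```

In the actual method, classifying each scan in a trial this way yields a stage sequence per time point, and the semi-Markov model then estimates how long each stage lasted on that trial.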