Task-solving processes and changes in these processes have long been expected to provide valuable information about children's performance in school. This article used electronic tangibles (concrete materials that can be physically manipulated) and a dynamic testing format (pretest, training, and posttest) to investigate children's task-solving processes and how these processes change as a result of training. We also evaluated the value of process information for predicting school results. Participants were N = 253 children with a mean age of 7.8 years. Half of them received a graduated prompts training; the other half received repeated practice only. Three process measures were used: grouping behaviour, verbalized strategies, and completion time. The measures showed different effects of training, with verbalized strategies showing the largest posttest difference between trained and untrained children. Although the process measures were related to performance on our dynamic task and to math and reading performance in school, the amount of help provided during training was the strongest predictor of school results. We concluded that children's task-solving processes provide valuable information, but that their interpretation requires more research.
The study investigated the value of process data obtained from a group-administered computerized dynamic test of analogical reasoning, consisting of a pretest-training-posttest design. We sought to evaluate the effects of training on processes and performance, and the relationships between process measures and performance on the dynamic test. Participants were N = 86 primary school children (mean age = 8.11 years, SD = 0.63). The test consisted of constructed-response geometrical analogy items, requiring several actions to construct an answer. The process data enabled scoring of the total solving time, the time taken for initial planning of the task, the time taken for checking the provided answer, and the variation in solving time. Training led to improved performance compared with repeated practice, but this improvement was not reflected in task-solving processes. Almost all process measures were related to performance, but the effect of training or repeated practice on this relationship differed widely between measures. In conclusion, the findings indicate that investigating process indicators within computerized dynamic testing of analogical reasoning provides information about children's learning processes, but that not all processes were affected in the same way by training.
Dynamic testing aims to assess potential for learning by measuring how much a child can profit from a training procedure during the testing process. These procedures often include transfer tasks as a measure of the potential for learning, as the ability to transfer learned skills and knowledge is considered essential for successful learning. The aim of the current study was to investigate whether including a specific type of transfer task, a so-called reversal procedure, in a dynamic testing context would provide extra information on 6- to 7-year-old children's potential for learning. Moreover, it was investigated whether children's ability to transfer newly learned skills depended on their level of cognitive flexibility, as this executive function has previously been argued to be of significant importance in the transfer of academic skills. The results revealed that children's transfer abilities were indeed related to another measure of potential for learning, i.e., children's learner status. In addition, children's cognitive flexibility predicted their transfer abilities and appeared to play a greater role for children who did not receive training or who profited little from the training procedure. The results underline the importance of supporting children's cognitive flexibility when teaching for transfer.
This study evaluated a new measure for analyzing the process of children's problem solving in a series completion task. The measure, which we termed the Grouping of Answer Pieces (GAP), was employed to provide information on problem representation and restructuring. The task was conducted using an electronic tangible interface, allowing both natural manipulation of physical materials by the children and computer monitoring of the process. The task was administered to 88 primary school children from grade 2 (M = 8.2 years, SD = 0.50). GAP was a moderate predictor of accuracy on the series completion task. Averaged over multiple items, GAP, verbalizations, and time measures were all related to accuracy. At the item level, however, GAP was the only process measure related to solving success, and this relationship was mediated by item difficulty. Further research is needed to investigate the precise relationship between problem solving and GAP.
This study aimed to investigate children's potential for reasoning by analogy using a newly developed computerized dynamic test, and the potentially differential influence of executive functions (cognitive flexibility, attention, and planning) on static and dynamic measures of analogical reasoning. Participants were 64 children (mean age = 7.55 years). The study employed a two-session experimental test-training-test design. Based on randomized blocking, half of the children received a graduated prompts training between pre-test and post-test; the other half did not. Trained children improved more than control children in both their accuracy scores and the number of accurately applied transformations from pre-test to post-test. Cognitive flexibility, attention, and planning were further found to be associated with successful solving of analogies. Training children in analogical reasoning seemed to reduce the effect of executive functions. It was also found that children who were more cognitively flexible needed more prompts during the training.
The current study investigated whether ExeFun-Mat, a domain-specific intervention targeting math and executive functions in primary school children with a Roma background, would be effective in improving their scholastic performance and executive functioning. ExeFun-Mat is based on the principles of the reciprocal teaching approach, scaffolding, and self-questioning. The domain-specific content was divided into modules, each consisting of a set of graded tasks. The criteria for the grading and hierarchical organization of the tasks were based on the level of cognitive difficulty and the type of representation. In total, 122 students attending grade four of elementary school took part in the project. The study used a pretest-intervention-posttest experimental design with three conditions: an experimental group, an active control group, and a passive control group. To assess the children's level of executive functioning (EF), the Delis-Kaplan executive function system test battery was used; to assess children's mathematical achievement, the cognitive abilities test (the numeracy battery) and ZAREKI, a neuropsychological test battery for numerical processing and calculation, were used. The results suggested that both math performance and executive functions improved over time, with no significant differences between the three conditions. An additional correlational analysis indicated that pretest performance was not related to posttest performance for the children in the experimental and active control groups.
Proponents of dynamic testing have advocated its use as a replacement for, or addition to, conventional tests. This research investigated the effects of using versus not using a pretest on both posttest outcomes and the processes used in solving inductive reasoning tasks in dynamic testing with a graduated prompts training. Sixty-seven 7- to 8-year-old children were assigned, using a randomized blocking procedure, to either a group that received a pretest or a group that did not. No significant differences were found between the two groups on posttest accuracy, process measures, number of hints needed during training, amount of time needed for testing, or the prediction of school-related measures. We concluded that the decision of whether a pretest is necessary should be based on the research question to be answered, because the pretest does not appear to influence posttest results.