2019
DOI: 10.1037/spq0000274
Evaluation of schedule frequency and density when monitoring progress with curriculum-based measurement.

Abstract: School-based professionals often use curriculum-based measurement of reading (CBM-R) to monitor the progress of students with reading difficulties. Much of the extant CBM-R progress monitoring research has focused on its use for making group-level decisions, and less is known about using CBM-R to make decisions at the individual level. To inform the administration and use of CBM-R progress monitoring data, the current study evaluated the utility of 4 progress monitoring schedules that differed in frequency (on…

Cited by 11 publications (15 citation statements)
References 18 publications
“…On the other hand, in the context of progress monitoring, more frequent testing has been praised for increasing the precision of students' academic growth estimation by reducing the measurement error (Mellard et al., 2009; Christ et al., 2012). For example, January et al. (2019) investigated the impact of testing frequency and the density of progress monitoring schedules on the accuracy of performance growth estimation in reading for second and fourth graders. The findings indicated that assessing students more frequently (e.g., twice a week rather than once a week) could significantly improve the confidence of accurately measuring students' academic growth.…”
Section: Test Optimization in Computerized Formative Assessments
Citation type: mentioning
confidence: 99%
“…More recently, a few studies have explored the possibility of utilizing big data in education to provide individualized recommendations for test administration in computerized formative assessments (Bulut et al., 2020). The studies focused on providing systematic help to teachers, to increase their capacity to individually monitor, evaluate, and make decisions based on students' learning trajectory (January et al., 2019; Dede, 2016; Fischer et al., 2020).…”
Section: Test Optimization Using Big Data in Education
Citation type: mentioning
confidence: 99%
“…However, this view fails to consider the effect of testing on the individual student (i.e., testing fatigue or burn-out) and the effect of the broader educational context (e.g., missing instructional time or wasted resources). Also, many researchers have noted the lack of consensus on the optimal number of test administrations or the testing frequency that should be used with computerized formative assessments (e.g., Nelson et al., 2017; January et al., 2018, 2019; Van Norman and Ysseldyke, 2020). This is due, at least in part, to the fact that the optimal number and frequency of test administrations depends on many factors such as grade level, subject (e.g., reading or mathematics), the type of computerized assessment (e.g., adaptive or non-adaptive assessment), and individual students' response to instruction.…”
Section: Optimizing Test Administrations
Citation type: mentioning
confidence: 99%
“…Currently, one of the major challenges for school-based professionals is determining the timing and frequency of test administrations. For some time, researchers have argued that frequent test administrations over a long period can be highly beneficial when making individual-level decisions based on formative assessments (Christ et al., 2012; Thornblad and Christ, 2014; January et al., 2019). The potential issue with this approach, however, is that frequent testing (e.g., weekly or bi-weekly) diminishes the amount of instructional time that students receive, which may exacerbate some of their difficulties in further developing their academic skills.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
“…Ideally, data should be collected at least once during every instructional session, although this does not mean that data need to be collected on every student response during that session (Test et al., 2017). However, evidence from progress-monitoring research on academic skills suggests that more frequent data collection allows educators to identify trends in data with greater precision, which may support more effective instructional decision making (January et al., 2019). In other words, it is important for educators to collect data frequently so that they can make timely decisions regarding the effectiveness of their instructional approaches.…”
Section: Data-Driven Decision Making and Progress-Monitoring Approaches
Citation type: mentioning
confidence: hi