2016
DOI: 10.1080/15377903.2016.1207738
Comparing Two CBM Maze Selection Tools: Considering Scoring and Interpretive Metrics for Universal Screening

Cited by 5 publications (6 citation statements)
References 34 publications
“…With regard to our second research question, observing no differences using general prediction metrics is consistent with our previous research (Ford et al., ; Ford et al., ), which found few such differences. In terms of our current study, sensitivity for AIMSweb was observed to be higher compared to DIBELS Next and FAST.…”
Section: Discussion (supporting; confidence: 92%)
“…With regard to our first research question, observing no difference for students’ NWF between AIMSweb and DIBELS Next may seem unsurprising, as one would presume they measure the same construct (i.e., phonological awareness). However, our previous research found differences in students’ performance between AIMSweb and DIBELS Next for maze selection (Ford et al., ), as well as across AIMSweb, DIBELS Next, and FAST for OPR (Ford et al., ). Moreover, it should be noted the differences between students’ performance on the NWF tools for AIMSweb and DIBELS Next approached statistical significance, which may be identified with a larger sample of students.…”
Section: Discussion (mentioning; confidence: 84%)
“…Existing recommendations for specificity suggest that values above .70 are desirable (Kilgus, Methe, Maggin, & Tomasula, 2014). Nevertheless, some researchers advocate for balancing sensitivity and specificity when deriving cut-scores as to not place a higher emphasis on FNs or FPs because the cost of either result may vary across different school contexts (Baker et al, 2015; Ford, Missall, Hosp, & Kuhle, 2016; Nelson, Van Norman, & Lackner, 2016).…”
Section: Evaluating Screening Measures (mentioning; confidence: 99%)
“…Since publication of Shin’s meta-analysis, the predictive validity of maze CBM in the middle grades has remained an interest of several researchers (Chung, Espin, & Stevenson, 2018; Conoyer et al, 2017; Ford, Missall, Hosp, & Kuhle, 2016; Muijselaar et al, 2017; Stevenson, 2017; Stevenson et al, 2016). As with ORF, Stevenson et al (2016) reported that the magnitude of maze CBM predictive validity coefficients weakened from .54 to .15 over the middle grades, a more dramatic deterioration than was observed in their ORF CBM data.…”
Section: Maze CBM (mentioning; confidence: 99%)