2022
DOI: 10.1097/iyc.0000000000000223

Comparing and Contrasting Quality Frameworks Using Research on High-Probability Requests With Young Children

Abstract: The purpose of this study was to compare and contrast frameworks for evaluating methodological rigor in single case research. Specifically, research on high-probability requests to increase compliance in young children was evaluated. Ten studies were identified and were coded using 4 frameworks. These frameworks were the Council for Exceptional Children Standards for Evidence-based Practices, What Works Clearinghouse, Risk of Bias Assessment for Single Subject Experimental Designs, and Single Case Analysis and…

Cited by 3 publications (3 citation statements)
References 39 publications
“…Finally, SCED does not provide as strong evidence as group designs, and though Reichow (2008) has been widely used in the literature and has good reliability, the use of different evaluative frameworks (e.g., CEC, WWC, SCARF, etc.) may provide different determinations of the strength of the research included (Hardy et al., 2022). Furthermore, more studies are needed before a meta-analysis of different intervention variables (e.g., dosage, training package) can determine the most effective components of caregiver-implemented interventions and training protocols.…”
Section: Discussion
confidence: 99%
“…The SCARF was used to evaluate the quality and rigor of each SCD. SCARF has been used in over 13 published syntheses to evaluate SCD quality and rigor, most notably with behaviorally based literature such as NC (e.g., Hardy et al., 2022; Trump et al., 2020). The unit of analysis is the design, rather than the article, allowing for more sensitive evaluations of SCD quality and rigor when compared to other tools (e.g., What Works Clearinghouse; Council for Exceptional Children design standards; Hardy et al., 2022; Zimmerman et al., 2018).…”
Section: Methods
confidence: 99%
“…Due to its hierarchical structure and weighting of indicators, the SCARF 1.0 was included in the analysis of reviewed studies using a single-case design. Other systematic reviews have included multiple QIs to compare results and expand analysis of varying components (e.g., Hardy et al., 2022; Zimmerman et al., 2018); as the CEC and SCARF weighted varying components differently, we included both tools in our analysis of the included studies’ methodological rigor. The following research questions guided the study:…”
confidence: 99%