2016
DOI: 10.1177/0145445516644699
Interrater Agreement on the Visual Analysis of Individual Tiers and Functional Relations in Multiple Baseline Designs

Abstract: Previous research on visual analysis has reported low levels of interrater agreement. However, many of these studies have methodological limitations (e.g., use of AB designs, undefined judgment task) that may have negatively influenced agreement. Our primary purpose was to evaluate whether agreement would be higher than previously reported if we addressed these weaknesses. Our secondary purposes were to investigate agreement at the tier level (i.e., the AB comparison) and at the functional relation level in mu…

Cited by 50 publications (55 citation statements)
References 20 publications
“…Another benefit of including nonparametric statistical judgment aids in behavior analytic research is that it may promote wider acceptance of applied behavior analysis across other scientific disciplines (e.g., education, psychology, and neuroscience) where researchers may not be familiar with visual analysis. Although previous research has found that visual analysis of single‐subject graphs can be reliable (e.g., Diller, Barry, & Gelino, ; Kahng et al, ; Wolfe, Seaman, & Drasgow, ), the same research has observed a reduction in visual analysis reliability as a function of decreased rater experience and increased data complexity (Diller et al, ). In this connection, the supplemental information generated by ANSA, combined with standardized guidelines, can support the data interpretation process for behavior analytic students and practitioners who are gaining fluency in visual analysis.…”
Section: Discussion
confidence: 94%
“…Unlike other fields (e.g., psychology) that rely on statistical comparisons and the standardization of collected data, behavior analysts rely almost exclusively on the visual inspection of summarized and plotted data over time. An extensive line of research has suggested that teaching visual analysis of graphs using standard didactic approaches and traditional supervisory methods might not lead to the development of fluent and reliable visual analytic repertoires in practicing behavior analysts (Danov & Symons, ; DeProspero & Cohen, ; Diller et al, ; Ninci, Vannest, Willson, & Zhang, ; Ottenbacher, ; Wolfe et al, ). However, a few studies have shown that when participants are directly taught specific visual analysis rules, agreement across participants improved (Hagopian et al, ; Roane, Fisher, Kelley, Mevers, & Bouxsein, ).…”
Section: Introduction
confidence: 99%
“…This skill is necessary to determine whether interventions are effective, and in some cases, to identify the specific components of interventions that are controlling the target behavior(s). However, few studies have investigated different instructional methods for teaching students and practitioners to reliably visually analyze graphs to determine functional relations between independent and dependent variables in more common behavior analytic experimental designs (Wolfe et al, ).…”
Section: Introduction
confidence: 99%
“…Single-case research designs differ from group experimental designs in several notable ways: (a) the unit of analysis is the individual participant rather than a comparison between groups; (b) although randomization can be used in any single-case research design, it has not been applied evenly across researchers, and single-case research is traditionally a response-guided approach; (c) visual analysis of line graphs, rather than parametric statistics, is used to determine the strength of the functional relation between the independent variable and the dependent variable and, ultimately, to reject the researcher's null hypothesis; and (d) external validity, specifically generalizability, is built through systematic and direct replication of intervention methods across participants, studies, and researchers. 12 Several long-standing criticisms have arisen from these methodological and conceptual differences, including (a) the low interrater reliability of visual analysis in single-case research, particularly when conducted by novice raters, 13,14 (b) the autocorrelation of single-case data, 15 and the related issues of a lack of randomization and the use of a response-guided approach, 16 (c) the claimed superiority of group experimental designs in minimizing threats to internal validity, 17 and (d) publication bias toward large treatment effects. 18,19 Additional unfounded criticism has stemmed from confusion about the differences between single-case research designs and nonexperimental AB case series.…”
Section: Special Issue On Advances In Single-case Research Design And…
confidence: 99%