Visual analysis of data in a multielement design
2016 | DOI: 10.1002/jaba.325

Abstract: Ninety Board Certified Behavior Analysts (BCBAs) and 19 editorial board members evaluated hypothetical data presented in a multielement design. We manipulated the variability, trend, and mean shift of the data and asked the participants to determine if the data demonstrated experimental control. The results showed that variability, trend, and mean shift interacted to affect the participants' ratings of experimental control. The level of agreement between participants was variable, but was generally lower than …
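
The abstract describes manipulating the variability, trend, and mean shift of hypothetical multielement data. As a rough illustration of what such manipulated series might look like, here is a minimal Python sketch; the function name, parameter values, and random-noise model are assumptions for illustration only, not the study's actual stimuli.

```python
# Illustrative sketch only: the study's data-generation parameters are not given
# in the excerpt, so the names and values below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_condition(n_sessions, mean, trend, sd):
    """Generate one condition's series from a mean level, a linear trend, and noise."""
    sessions = np.arange(n_sessions)
    return mean + trend * sessions + rng.normal(0.0, sd, n_sessions)

# Two alternating conditions, as in a multielement design: the "treatment" series
# differs from "control" by a mean shift, and both can carry trend and variability.
control = simulate_condition(n_sessions=10, mean=10.0, trend=0.0, sd=1.0)
treatment = simulate_condition(n_sessions=10, mean=6.0, trend=-0.2, sd=1.0)

print("control:  ", np.round(control, 1))
print("treatment:", np.round(treatment, 1))
```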

Cited by 21 publications (31 citation statements) · References 6 publications (21 reference statements)
“…Another benefit of including nonparametric statistical judgment aids in behavior analytic research is that it may promote wider acceptance of applied behavior analysis across other scientific disciplines (e.g., education, psychology, and neuroscience) where researchers may not be familiar with visual analysis. Although previous research has found that visual analysis of single‐subject graphs can be reliable (e.g., Diller, Barry, & Gelino; Kahng et al.; Wolfe, Seaman, & Drasgow), the same research has observed a reduction in visual analysis reliability as a function of decreased rater experience and increased data complexity (Diller et al.). In this connection, the supplemental information generated by ANSA, combined with standardized guidelines, can support the data interpretation process for behavior analytic students and practitioners who are gaining fluency in visual analysis.…”
Section: Discussion (mentioning)
Confidence: 94%
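
The citation statement above refers to nonparametric statistical judgment aids (ANSA) without describing their computations, which the excerpt does not provide. As one example of what such an aid can look like, the sketch below computes Nonoverlap of All Pairs (NAP) between two conditions; it is not ANSA's actual method, and the data values are hypothetical.

```python
# Hedged illustration: the excerpt does not describe ANSA's computations.
# Nonoverlap of All Pairs (NAP) is shown only as one example of a nonparametric
# index that can supplement visual analysis of two alternating conditions.
from itertools import product

def nap(baseline, treatment, lower_is_better=True):
    """Proportion of (baseline, treatment) pairs showing improvement; ties count 0.5."""
    pairs = list(product(baseline, treatment))
    score = 0.0
    for b, t in pairs:
        if t == b:
            score += 0.5
        elif (t < b) if lower_is_better else (t > b):
            score += 1.0
    return score / len(pairs)

# Hypothetical rates of problem behavior under two alternating conditions.
control = [9, 11, 10, 12, 9, 10]
treatment = [6, 7, 5, 8, 6, 7]
print(f"NAP = {nap(control, treatment):.2f}")  # 1.00 here: complete nonoverlap
```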
“…Unlike other fields (e.g., psychology) that rely on statistical comparisons and the standardization of collected data, behavior analysts rely almost exclusively on the visual inspection of summarized and plotted data over time. An extensive line of research has suggested that teaching visual analysis of graphs using standard didactic approaches and traditional supervisory methods might not lead to the development of fluent and reliable visual analytic repertoires in practicing behavior analysts (Danov & Symons; DeProspero & Cohen; Diller et al.; Ninci, Vannest, Willson, & Zhang; Ottenbacher; Wolfe et al.). However, a few studies have shown that when participants are directly taught specific visual analysis rules, agreement across participants improved (Hagopian et al.; Roane, Fisher, Kelley, Mevers, & Bouxsein).…”
Section: Introduction (mentioning)
Confidence: 99%
“…First, invoking visual inspection is not invariably straightforward. Often, naïve and well‐experienced researchers in behavior analysis do not reliably invoke visual inspection criteria or selectively invoke some criteria for their conclusions but ignore others (e.g., Diller et al, 2016; Ninci et al, 2015; Normand & Bailey, 2006; Wolfe et al, 2016). In addition, the different criteria, such as those provided in Table 2, do not always lead to the same conclusion.…”
Section: Single‐case Experimental Designs (mentioning)
Confidence: 99%