2023
DOI: 10.1002/jaba.980
The influence of data characteristics on interrater agreement among visual analysts

Abstract: Visual analysis is the primary method of analyzing single‐case research data, yet relatively little is known about the variables that influence raters' decisions and rater agreement. Previous research has suggested that trend, variability, and autocorrelation may negatively affect interrater agreement, but studies have been limited by small numbers of graphs and participants whose knowledge of single‐case research was not described. The purpose of this study was to examine the main and interaction effects of t…

Cited by 4 publications (4 citation statements)
References 39 publications (76 reference statements)
“…This finding is congruent with the outcomes of Dart and Radley (2017), which showed that errors of commission were least likely when null effects were displayed. Similarly, these results mirror those of Wolfe and Seaman (2023), who found that graphs demonstrating an effect size of 0 or a large effect resulted in higher levels of interrater agreement. This is also consistent with the findings of Ninci et al. (2015), who found that graphs showing small or moderate effects had lower levels of interrater agreement than those with larger or no effects.…”

Section: Discussion (supporting)
Confidence: 82%
“…Differences between effect size calculations and ratings of effects through visual analysis may arise from considerations of changes in trend and variability between phases, or of overlap between phases, which experts strongly agreed were influential in their analysis of a single-case design graph. Wolfe and Seaman (2023) found that trend, effect size, and variability were influential in ratings of AB graphs; it is possible that these characteristics affected raters' judgments of graphs inconsistently in the present study. Furthermore, there is a lack of consensus on the number of main effects needed to determine a functional relation: some, but not all, indicate that there should be three effects demonstrated (Wolfe & Seaman, 2023).…”

Section: Discussion (mentioning)
Confidence: 53%