Situational Effects on Observer Accuracy: Behavioral Predictability, Prior Experience, and Complexity of Coding Categories
Mash & McElwee (1974)
DOI: 10.2307/1127957

Cited by 53 publications (23 citation statements: 1 supporting, 22 mentioning, 0 contrasting); citing publications span 1979–2012. References: 0 publications.
“…Few studies have examined methods for conducting initial observer training. Research has shown that recording unpredictable (rather than predictable) events during practice produces better generalization to novel situations (Mash & McElwee, 1974), that it takes longer to acquire competence in observing a larger (compared to a smaller) number of events (Bass, 1987), and that supervised (compared to unsupervised) practice tends to generate the recording of a larger number of events (Wildman, Erickson, & Kent, 1975). These studies provided useful information for structuring the content or supervision of practice sessions; however, they did not evaluate training methods per se.…”
mentioning
confidence: 99%
“…The quality of observational data is usually judged from interobserver agreement scores because of the difficulty in procuring a criterion against which to measure the observers' actual accuracy. Possible accuracy criteria, however, include mechanical measurements of behavior (e.g., Bechtel, 1967), mechanically generated responses (e.g., Repp, Roberts, Slack, Repp, & Berkler, 1976), recorded behaviors orchestrated by a predetermined script (e.g., Mash & McElwee, 1974), and consensually validated criterion protocols produced by the observation of multiple observers (e.g., Kent et al., 1974; Foster & Cone, 1980). Although agreement is generally used to evaluate the quality of observational data, agreement and accuracy are not the same (Foster & Cone, 1980; Johnson & Bolstad, 1973; Kazdin, 1977).…”
mentioning
confidence: 99%
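The distinction this excerpt draws between interobserver agreement and accuracy is easy to make concrete. Below is a minimal Python sketch, not taken from any of the cited papers: two hypothetical observers' interval records are compared with each other (agreement) and with a scripted criterion record of the kind Mash and McElwee used (accuracy). All data values are invented for illustration.

```python
# Hypothetical interval records: 1 = target behavior scored present.
criterion  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # scripted "true" record
observer_a = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
observer_b = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]

def percent_match(x, y):
    """Interval-by-interval percent agreement between two records."""
    matches = sum(1 for a, b in zip(x, y) if a == b)
    return 100.0 * matches / len(x)

# Agreement is observer-vs-observer; accuracy is observer-vs-criterion.
print(f"A-B agreement: {percent_match(observer_a, observer_b):.0f}%")  # 80%
print(f"A accuracy:    {percent_match(observer_a, criterion):.0f}%")   # 90%
print(f"B accuracy:    {percent_match(observer_b, criterion):.0f}%")   # 70%
```

Here the two observers agree on 80% of intervals even though their accuracies differ (90% vs. 70%): shared or offsetting errors can inflate agreement, which is why agreement and accuracy are not interchangeable.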
“…Gowland and coworkers (1995) also reported that the complexity of the GMPM scoring system was an explanation for the differences in scoring seen between evaluators. Interobserver reliability has been found to be influenced by the complexity of the coding system (Mash and McElwee 1974). Complexity refers to the number of different response categories of an observational coding system and the number of different behaviors that are scored within a particular observational system (Mash and McElwee 1974, Kazdin 1977).…”
Section: Reliability
mentioning
confidence: 99%
“…Interobserver reliability has been found to be influenced by the complexity of the coding system (Mash and McElwee 1974). Complexity refers to the number of different response categories of an observational coding system and the number of different behaviors that are scored within a particular observational system (Mash and McElwee 1974, Kazdin 1977). Systems with more categories and behaviors are more complex than systems with fewer categories, thus making reliability more difficult.…”
Section: Reliability
mentioning
confidence: 99%
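The complexity claim in these excerpts can also be illustrated numerically. The toy model below is an assumption of this note, not taken from Mash and McElwee (1974) or Kazdin (1977): each observer independently records the correct category with probability 1 − p and otherwise errs uniformly over the remaining k − 1 categories. Even with the error rate held fixed, expected agreement declines as the number of categories k grows, and chance agreement falls off much faster.

```python
# Toy model (illustrative assumption, not from the cited papers):
# each observer scores the correct category with probability 1 - p,
# else picks uniformly among the other k - 1 categories.

def expected_agreement(k: int, p: float) -> float:
    """P(two independent observers record the same category)."""
    both_correct = (1 - p) ** 2
    same_wrong_category = p ** 2 / (k - 1)  # their errors must coincide
    return both_correct + same_wrong_category

def chance_agreement(k: int) -> float:
    """Agreement from purely random coding over k equiprobable categories."""
    return 1.0 / k

for k in (2, 5, 10, 20):
    print(f"k={k:2d}  expected={expected_agreement(k, p=0.2):.3f}  "
          f"chance={chance_agreement(k):.3f}")
```

In practice the per-interval error rate p itself tends to rise as a coding system grows more complex, compounding the decline sketched here; that is the effect the excerpts describe.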