1993
DOI: 10.1111/j.1744-6570.1993.tb00887.x

Beyond Formal Experimental Design: Towards an Expanded View of the Training Evaluation Process

Abstract: Textbook treatments of training evaluation typically equate evaluation with the measurement of change and focus on formal experimental design as the mechanism for controlling threats to the inference that the training intervention produced whatever change was observed. This paper notes that two separate questions may be of interest: How much change has occurred? and Has a target performance level been reached? We show that the evaluation mechanisms needed to answer the two types of questions are markedly diff…



Cited by 115 publications (90 citation statements)
References 1 publication
“…For instance, had we split the sample size in two to create a control group, we would have ruled out the ambiguity regarding potential alternative explanations, but in terms of power, "the true experimental design [would have been] strikingly inadequate" (Sackett & Mullen, 1993, p. 624). This is because the power of the experiment would have gone down from .98 as present in this study (for N = 80 [87 in our study], d = .5, r_xy = .3) to .59 (for n_e = 40 and n_c = 40 [43.5 in our study], d = .5, r_xy = .3), a 40% reduction in probability of correctly rejecting the null hypothesis when it is false (Arvey & Cole, 1989; Arvey et al., 1985; Cohen, 1988; Sackett & Mullen, 1993).…”
Section: The Methods Of Analysis (mentioning)
confidence: 76%
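The power contrast quoted above can be reproduced approximately with a standard power calculation. The sketch below (Python with statsmodels) is illustrative only and rests on assumptions not stated in the excerpt: a paired pre-post t-test for the single-group design, an independent-groups posttest t-test for the split design, total N = 80, d = .5, pre-post correlation r_xy = .3, and a two-sided alpha of .05. The tables in Arvey et al. (1985) and Sackett and Mullen (1993) may rest on somewhat different conventions.

# Illustrative sketch, not the exact computation used by the cited authors.
# Assumptions: paired pre-post t-test vs. independent-groups posttest t-test,
# total N = 80, d = .5, pre-post correlation r_xy = .3, two-sided alpha = .05.
from math import sqrt
from statsmodels.stats.power import TTestPower, TTestIndPower

N, d, r_xy, alpha = 80, 0.5, 0.3, 0.05

# Single-group pre-post design: difference scores shrink the error term,
# so the effective effect size is d / sqrt(2 * (1 - r_xy)).
d_paired = d / sqrt(2 * (1 - r_xy))
power_prepost = TTestPower().power(effect_size=d_paired, nobs=N, alpha=alpha)

# Splitting the same pool into treatment and control groups of N/2 each.
power_split = TTestIndPower().power(effect_size=d, nobs1=N // 2, alpha=alpha, ratio=1.0)

print(f"single-group pre-post, N = {N}: power ~ {power_prepost:.2f}")  # roughly .96-.98
print(f"two groups of {N // 2} each:       power ~ {power_split:.2f}")  # roughly .60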
“…As in much previous research on training evaluations (e.g., Arvey, Cole, Hazucha, & Hartanto, 1985; Sackett & Mullen, 1993), the constraint of not including a control group in our study was the population of only N = 87 available in this organization. Sackett and Mullen (1993, p. 624) explicate this situation by noting that "… we see no ready mechanism for combating the low statistical power of the true experimental design in the setting where N is constrained.…”
Section: The Methods Of Analysis (mentioning)
confidence: 98%
“…These include the difficulties of meeting conventional scientific requirements of internal and external validity (Cook & Campbell, 1979; Sackett & Mullen, 1993). Furthermore, practitioner involvement may compromise the independence and objectivity of the academic researcher (Beyer & Trice, 1982; Grey, 2001; Hackman, 1985), and participating organizations may view the research findings as proprietary and, thus, not available for dissemination in the public domain (Lawler et al., 1985).…”
Section: Design the Research Project To Be A Collaborative Learning C… (mentioning)
confidence: 99%
“…However, such precautions are not foolproof. For instance, Sackett and Mullen (1993) assert that "a pre-experimental [generally uninterpretable] design, paired with careful investigation into the plausibility of various threats, is still better than no evaluation at all, given that organizations must make decisions about future training efforts with or without evaluation data" (p. 621). Perhaps the real ethical issue is not whether an experimental evaluation design is preferable to a preexperimental one, but whether or not one chooses to base decisions about HRD programs on systematically gathered information.…”
Section: Results (mentioning)
confidence: 99%