2006
DOI: 10.1037/0021-9010.91.6.1276

An examination of learning processes during critical incident training: Implications for the development of adaptable trainees.

Abstract: Three experiments are reported that examined the process by which trainees learn decision-making skills during a critical incident training program. Formal theories of category learning were used to identify two processes that may be responsible for the acquisition of decision-making skills: rule learning and exemplar learning. Experiments 1 and 2 used the process dissociation procedure (L. L. Jacoby, 1998) to evaluate the contribution of these processes to performance. The results suggest that trainees used a…
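For readers unfamiliar with the process dissociation procedure mentioned in the abstract, the sketch below shows how the standard inclusion/exclusion equations from Jacoby's framework yield separate estimates of a controlled and an automatic contribution to performance. This is a minimal illustration under standard assumptions; the mapping of the controlled estimate to rule learning and the automatic estimate to exemplar learning, along with all function and variable names, is illustrative and not the paper's own notation.

```python
# Minimal sketch of process-dissociation estimation (Jacoby's framework).
# Standard equations assumed:
#   P(inclusion) = R + A * (1 - R)
#   P(exclusion) = A * (1 - R)
# where R estimates the controlled process (here, rule use) and A the
# automatic process (here, exemplar-based responding). Names are illustrative.

def process_dissociation(p_inclusion: float, p_exclusion: float) -> tuple[float, float]:
    """Return (R, A) estimates from inclusion/exclusion performance."""
    r = p_inclusion - p_exclusion          # controlled contribution
    if r >= 1.0:                           # guard against division by zero when R == 1
        return r, 0.0
    a = p_exclusion / (1.0 - r)            # automatic contribution
    return r, a

# Example with hypothetical proportions correct on each test type:
r_est, a_est = process_dissociation(p_inclusion=0.80, p_exclusion=0.30)
print(f"rule (controlled) estimate R = {r_est:.2f}")      # 0.50
print(f"exemplar (automatic) estimate A = {a_est:.2f}")   # 0.60
```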

Cited by 17 publications (10 citation statements) · References 61 publications
“…They found that adaptive guidance during training was related to more appropriate study and practice strategies, more on‐task cognition, and higher self‐efficacy, all of which influenced task knowledge and skill development, which ultimately positively predicted adaptive transfer performance. Surprisingly, Neal et al. (2006) found that providing examples of factors that could change the application of decision rules had a negative effect on decision accuracy in adaptive far‐transfer trials.…”
Section: Distal Predictors
confidence: 99%
“…Out of 11 empirical articles in this issue, 9 included a “limitations” section, and in these sections were 11 apologies for study design limitations, 3 apologies for measures' shortcomings, 1 apology for a small effect size, 5 apologies for using self-report data, and 5 apologies for sample characteristics. Of these latter apologies, authors apologized for using students (Price, Harrison, & Gavin, 2006), teachers (Trevor & Wazeter, 2006), firefighters (Neal et al., 2006), engineers (Joireman, Kamdar, Daniels, & Duell, 2006), and managers (Morgeson & Humphrey, 2006) as research participants (what kind of sample does one not have to apologize for?). Highhouse and Gillespie's analysis of sample generalizability issues suggests that despite the repeated apologies for sample characteristics that are seen in empirical studies (and, we suspect, reviewer comments that provoke these apologies), “it is rare in applied behavioral science for the nature of the sample to be an important consideration for generalizability” (p. 250).…”
Section: Have You Heard This Story?
confidence: 99%
“…Effect sizes in within‐subjects designs represent the differences between measurements of criteria taken before and after the coaching has taken place (with varying duration of time between measurements depending on the number and schedule of coaching sessions). An alternative design is the between‐subjects design (e.g., Ayres & Malouff; Holladay & Quiñones; Neal et al., 2006; Orvis, Fisher, & Wasserman). In these studies, effect sizes represent the differences between control and experimental (i.e., coaching) groups measured after the coaching has taken place.…”
Section: Introduction
confidence: 99%
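As a concrete illustration of the design distinction drawn in the snippet above, the sketch below computes a between‐subjects standardized mean difference (Cohen's d with a pooled standard deviation) alongside a within‐subjects pre/post effect size. The data and function names are hypothetical, and published meta-analyses typically apply further corrections (e.g., for the pre/post correlation or small samples) that are omitted here.

```python
import math
import statistics

def cohens_d_between(treatment: list[float], control: list[float]) -> float:
    """Between-subjects d: group difference after treatment / pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

def cohens_d_within(pre: list[float], post: list[float]) -> float:
    """Within-subjects d: mean pre/post change / SD of the change scores."""
    changes = [after - before for before, after in zip(pre, post)]
    return statistics.mean(changes) / statistics.stdev(changes)

# Hypothetical criterion scores for illustration only:
print(cohens_d_between(treatment=[6.1, 7.2, 6.8, 7.5], control=[5.0, 5.9, 5.4, 6.1]))
print(cohens_d_within(pre=[5.0, 5.9, 5.4, 6.1], post=[6.1, 7.2, 6.8, 7.5]))
```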