2016
DOI: 10.1017/s1754470x15000288
The challenge of training supervisors to use direct assessments of clinical competence in CBT consistently: a systematic review and exploratory training study

Abstract: Evaluating and enhancing supervisee competence is a key function of supervision and can be aided by the use of direct assessments of clinical competence, e.g. the Cognitive Therapy Scale – Revised (CTS-R). We aimed to review the literature regarding inter-rater reliability and training on the CTS and CTS-R to present exploratory data on training raters to use this measure. We employed a systematic review. An exploratory study evaluated the outcomes of a CTS-R supervisor training workshop (n = 34), including se…

Cited by 12 publications (11 citation statements)
References 54 publications
“…Generally, the research on mental health practitioners' self-assessment highlights discrepancies in the degree of concordance between self-assessments and external raters (e.g., supervisors; Creed et al, 2016; Walfish, McAlister, O'Donnell, & Lambert, 2012; Waltman, Frankel, & Williston, 2016), which is commonly used as a proxy for self-assessment accuracy (Neimeyer, Taylor, Rozensky, & Cox, 2014), and suggests that practitioners tend to both over- and under-estimate their level of competence (Loades & Myles, 2016). However, the literature is limited in the extent to which it can generalise to clinical psychology trainees (Creed et al, 2016; Loades & Armstrong, 2016). To start, research primarily involves practitioners other than psychologists, who have several years of clinical experience outside their training (e.g., nursing; Belar et al, 2001; McManus et al, 2012; Walfish et al, 2012), which makes it difficult to establish the effects of early clinical experience on psychology trainees' ability to self-assess (Esposito et al, 2015; Halonen et al, 2003).…”
Section: Effects Of Clinical Experience On Self-assessment
confidence: 99%
“…Paper 9: Training supervisors to use direct assessments of clinical competence in CBT. Loades & Armstrong (2016) examine another training-related challenge: establishing inter-rater reliability amongst supervisors who assess therapist competence (i.e. applying the CTS and its revised version, the CTS-R).…”
Section: Summary and Discussion Of The 10 Papers In This Special Issue
confidence: 99%
“…They then participated in three 3-hour Short-SAGE workshops, in which three randomly selected supervision sessions were analysed and discussed in order to promote a common understanding of the instrument and to reach scoring consensus. The training outline was based on the Short-SAGE manual, and the description of Loades and Armstrong (2016). In each workshop, the coders listened to a recorded supervision session and then discussed the ratings of each item until the rationale was clarified and consensus was reached.…”
Section: Design
confidence: 99%