2011
DOI: 10.1176/appi.ps.62.6.670
A Comparison of Phone-Based and On-Site Assessment of Fidelity for Assertive Community Treatment in Indiana

Abstract: Objective This study investigated the reliability, validity, and role of rater expertise in a phone-administered fidelity assessment instrument based on the Dartmouth Assertive Community Treatment Scale (DACTS). Methods An experienced rater, paired with either a research assistant without fidelity assessment experience or a consultant familiar with the treatment site, conducted phone-based assessments of 23 teams providing assertive community treatment in Indiana. Using the DACTS, consultants conducted on-site evaluati…

Cited by 7 publications (12 citation statements)
References 11 publications (12 reference statements)
“…ICCs were between .84 (Services subscale) and .96 (Human Resources subscale) for subscales and .91 for total DACTS, all slightly higher than a previous study (McGrew et al, 2011), with the exception of the Services subscale. Mean absolute differences between phone and on-site scores also showed close consensus: .18 or less for all subscales and total DACTS.…”
Section: Discussion (contrasting)
confidence: 74%
“…For example, the ICC for phone total DACTS inter-rater reliability (.96) in this study exceeded the ICC found in an earlier study (McGrew et al, 2011) using ACT teams in a single state. In addition, the inter-rater reliability for both remote assessments was relatively close to the nearly perfect inter-rater reliability ICCs for onsite assessment demonstrated across 52 paired ratings in the National Evidence-Based Practices Project (ICC=.99) (McHugo et al, 2007).…”
Section: Discussion (contrasting)
confidence: 68%
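The comparisons quoted above rest on two statistics: the intraclass correlation coefficient for inter-rater agreement and the mean absolute difference between paired phone and on-site scores. A minimal sketch of how these could be computed for paired ratings — the five team scores below are hypothetical illustrations, not data from the study:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: array of shape (n_subjects, k_raters)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject (per-team) means
    col_means = ratings.mean(axis=0)   # per-rater (per-method) means
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical total DACTS scores for five teams, rated by phone and on-site
phone = np.array([4.2, 3.8, 4.5, 3.1, 4.0])
onsite = np.array([4.3, 3.7, 4.6, 3.2, 4.1])
pairs = np.column_stack([phone, onsite])

print(round(icc_2_1(pairs), 2))                 # → 0.98
print(round(np.abs(phone - onsite).mean(), 2))  # → 0.1
```

With the illustrative numbers above, agreement between methods is near-perfect, mirroring the pattern the cited studies report (ICCs of .84–.96 and mean absolute differences of .18 or less).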
“…McGrew, Stull, Rollins, Salyers, & Hicks, 2011). In a stepped approach, on-site assessments would be reserved for sites with "trigger" events, such as new team formation, significant staff turnover, or turnover in critical positions (e.g., team leader), and teams experiencing low fidelity scores or other implementation/quality problems.…”
Section: Discussion (mentioning)
confidence: 99%