Abstract: Assertive community treatment (ACT) is a complex community-based service approach to helping people with severe mental disorders live successfully in the community. Effective replication of the model and research on critical elements require explicit criteria and measurement. A measure of program fidelity to ACT and the results of its application to fifty diverse programs are presented.
“…Mean item scores of 4 and above are considered characteristic of established ACT teams. The DACTS has excellent interrater reliability (11) and can differentiate between ACT and other types of intensive case management (7). …”
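As a rough illustration of the scoring convention quoted above, the sketch below (Python) computes a program's mean item score and flags whether it reaches the 4.0 threshold. The DACTS rates program-level items on a 1-to-5 scale; the helper name and the example ratings are invented for illustration and are not the study's scoring code.

```python
# Illustrative sketch (not the study's code): score one program's DACTS ratings.
# Items are rated from 1 (not implemented) to 5 (fully implemented); mean item
# scores of 4.0 or above are conventionally read as characteristic of
# established ACT teams, per the excerpt above.

def dacts_mean_item_score(item_scores):
    """Return the mean item score for one program's DACTS ratings."""
    if not item_scores:
        raise ValueError("no item scores supplied")
    if any(not 1 <= s <= 5 for s in item_scores):
        raise ValueError("DACTS items are rated on a 1-5 scale")
    return sum(item_scores) / len(item_scores)

# Hypothetical ratings for one team (28 values, invented for illustration).
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5, 4, 3, 4, 5,
           4, 4, 5, 4, 4, 4, 5, 4, 3, 4, 5, 4, 4, 4]
mean_score = dacts_mean_item_score(ratings)
print(f"mean item score = {mean_score:.2f}; "
      f"{'meets' if mean_score >= 4.0 else 'below'} the conventional 4.0 threshold")
```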
Objective
This study investigated the reliability, validity, and role of rater expertise in a phone-administered fidelity assessment based on the Dartmouth Assertive Community Treatment Scale (DACTS).
Methods
An experienced rater, paired with either a research assistant who had no fidelity assessment experience or a consultant familiar with the treatment site, conducted phone-based assessments of 23 teams providing assertive community treatment in Indiana. Consultants also conducted on-site evaluations of the programs using the DACTS.
Results
The pairs of phone raters showed high levels of consistency [intraclass correlation coefficient (ICC)=.92] and consensus (mean absolute difference of .07). Phone and on-site assessments showed strong agreement (ICC=.87) and consensus (mean absolute difference of .07); they agreed within .1 scale point (2% of the scoring range) for 83% of sites and within .15 scale point for 91% of sites. Results were unaffected by the expertise level of the rater.
Conclusions
Phone-based assessment could help agencies monitor faithful implementation of evidence-based practices.
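The agreement statistics reported above can be reproduced in principle from paired site-level scores. The following is a minimal Python/NumPy sketch (not the study's analysis code) of a two-way ICC for absolute agreement between two raters, plus the mean absolute difference and the proportion of sites agreeing within .1 scale point; the array of paired scores is invented for illustration.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rating.
    `scores` is an (n_sites, n_raters) array of fidelity scores."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between sites
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Invented paired scores (phone vs. on-site) for a handful of sites,
# on the 1-5 DACTS scale -- purely to show the calculations.
paired = np.array([[4.1, 4.0], [3.8, 3.9], [4.4, 4.3],
                   [3.2, 3.3], [4.0, 4.1], [4.6, 4.5]])

diffs = np.abs(paired[:, 0] - paired[:, 1])
print(f"ICC(2,1)              = {icc_2_1(paired):.2f}")
print(f"mean absolute diff    = {diffs.mean():.2f}")
print(f"within .1 scale point = {(diffs <= 0.1 + 1e-9).mean():.0%} of sites")
```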
“…Fidelity was assessed with six criteria from the Dartmouth Assertive Community Treatment Scale (DACTS) (9): in vivo service delivery, 1:10 staff-to-client ratio, 1:100 psychiatrist-to-client ratio, 24-hour availability for crises, time-unlimited services, and substance abuse counselor on staff. Programs were required to meet at least four of the six DACTS criteria along with the three screening criteria to qualify for study inclusion as a FACT program.…”
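A minimal sketch of the screening rule described in the excerpt above (Python): a program qualifies as a FACT program for the study only if it meets at least four of the six DACTS-derived criteria. The criterion names and data structure are illustrative, not taken from the survey instrument, and the three additional screening criteria are not modeled.

```python
# Illustrative check of the "at least 4 of 6 DACTS criteria" rule described
# above. Field names are invented for this sketch; the study also required
# three separate screening criteria, not modeled here.

DACTS_CRITERIA = [
    "in_vivo_service_delivery",
    "staff_to_client_ratio_1_to_10",
    "psychiatrist_to_client_ratio_1_to_100",
    "crisis_availability_24h",
    "time_unlimited_services",
    "substance_abuse_counselor_on_staff",
]

def meets_fact_dacts_threshold(program: dict, minimum: int = 4) -> bool:
    """Return True if the program meets at least `minimum` of the six criteria."""
    met = sum(bool(program.get(criterion, False)) for criterion in DACTS_CRITERIA)
    return met >= minimum

# Hypothetical program record.
example = {
    "in_vivo_service_delivery": True,
    "staff_to_client_ratio_1_to_10": True,
    "psychiatrist_to_client_ratio_1_to_100": False,
    "crisis_availability_24h": True,
    "time_unlimited_services": True,
    "substance_abuse_counselor_on_staff": False,
}
print(meets_fact_dacts_threshold(example))  # True: 4 of 6 criteria met
```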
Objective
Forensic assertive community treatment (FACT) is an adaptation of the assertive community treatment model designed to prevent criminal recidivism through criminal justice collaborations. A national survey was conducted to examine FACT collaborations with probation departments.
Methods
Members of the National Association of County Behavioral Health and Developmental Disability Directors were surveyed to identify FACT programs. Programs reporting collaborations with probation departments were contacted to provide details.
Results
Fifty-six percent of FACT programs (15 of 27) reported collaborating with probation departments. Probation officers were assigned an average of 29±16 hours weekly, and 80% of programs (12 of 15) reported a favorable impact of collaboration on risk of patient rearrest. Only two programs reported using standard tools to formally assess recidivism risk. The most common barrier to collaboration was differences in philosophy between FACT team clinicians and probation officers.
Conclusions
FACT collaborations involving probation departments are common and are viewed by most program leaders as helpful in reducing criminal recidivism.
“…However, in ACT, compared to more recent evidence-based models, the development of fidelity criteria (Teague et al, 1998) and the program manual (Allness & Knoedler, 1998) occurred much later (1994–1998) than the original efficacy study (Stein & Test, 1980). The first scale developed to assess fidelity to ACT principles, in fact, followed the expert opinion method for fidelity development approach (2) above: reviewing published descriptions of the model, constructing a list of proposed critical ingredients, then having ACT experts (academics and practitioners) rate the importance of each ingredient (McGrew et al, 1994).…”
Section: Methods To Develop Fidelity Criteria
“…The first scale developed to assess fidelity to ACT principles, in fact, followed the expert opinion method for fidelity development approach (2) above: reviewing published descriptions of the model, constructing a list of proposed critical ingredients, then having ACT experts (academics and practitioners) rate the importance of each ingredient (McGrew et al, 1994). Subsequent ACT fidelity studies have built on these criteria, adjusting for specific settings and/or populations (Johnsen et al, 1999;Teague et al, 1995) and revising on the basis of new literature and measurement practicality (Teague et al, 1998). This general method has also been used to identify fidelity criteria for consumer-operated programs (Holter, Mowbray, Bellamy, & MacFarlane, in press).…”
Section: Methods To Develop Fidelity Criteria
“…note that the development of fidelity measures is hampered by the lack of well-defined models and that the identification of fidelity criteria and development of fidelity scales for ACT were so successful because this model was well developed and its operations were specified in detail. Development of fidelity criteria is more difficult with complex interventions that depend on practitioner decision-making using clinical expertise, on individualizing services to meet the multiple needs and preferences of consumers, or on the behaviors of multiple practitioners, structural variables, or service coordination (Teague et al, 1998).…”
Section: Issues In Establishing Fidelity Criteria
Fidelity may be defined as the extent to which delivery of an intervention adheres to the protocol or program model originally developed. Fidelity measurement has increasing significance for evaluation, treatment effectiveness research, and service administration. Yet few published studies using fidelity criteria provide details on the construction of a valid fidelity index. The purpose of this review article is to outline steps in the development, measurement, and validation of fidelity criteria, providing examples from health and education literatures. We further identify important issues in conducting each step. Finally, we raise questions about the dynamic nature of fidelity criteria, appropriate validation and statistical analysis methods, the inclusion of structure and process criteria in fidelity assessment, and the role of program theory in deciding on the balance between adaptation versus exact replication of model programs. Further attention to the use and refinement of fidelity criteria is important to evaluation practice.