An intervention's effectiveness is judged by whether it produces positive outcomes for participants, with the randomized experiment being the gold standard for determining intervention effects. However, the intervention-as-implemented in an experiment frequently differs from the intervention-as-designed, making it unclear whether unfavorable results are due to an ineffective intervention model or the failure to implement the model fully. It is therefore vital to accurately and systematically assess intervention fidelity and, where possible, incorporate fidelity data in the analysis of outcomes. This paper elaborates a five-step procedure for systematically assessing intervention fidelity in the context of randomized controlled trials (RCTs), describes the advantages of assessing fidelity with this approach, and uses examples to illustrate how this procedure can be applied.
Experimental design is the method of choice for establishing whether social interventions have the intended effects on the populations they are presumed to benefit. Experience with field experiments, however, has revealed significant limitations relating chiefly to (a) practical problems implementing random assignment, (b) important uncontrolled sources of variability occurring after assignment, and (c) a low yield of information for explaining why certain effects were or were not found. In response, it is increasingly common for outcome evaluation to draw on some form of program theory and extend data collection to include descriptive information about program implementation, client characteristics, and patterns of change. These supplements often cannot be readily incorporated into standard experimental design, especially statistical analysis. An important advance in outcome evaluation is the recent development of statistical models that are able to represent individual-level change, correlates of change, and program effects in an integrated and informative manner.
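The statistical models mentioned above (representing individual-level change, correlates of change, and program effects together) are typically multilevel growth models. As a simplified illustration only, not the models the abstract refers to, the sketch below uses a two-stage approximation on simulated data: fit one growth slope per participant, then compare mean slopes across randomized groups. All names, sample sizes, and effect sizes here are hypothetical.

```python
import random

random.seed(0)

# Simulated longitudinal data: 40 participants, 4 time points each.
# The treatment group is given a faster growth rate (hypothetical values).
def simulate(n_per_group=20, timepoints=4):
    records = []
    for pid in range(2 * n_per_group):
        treated = pid >= n_per_group
        intercept = random.gauss(50, 5)
        slope = random.gauss(2.0 if treated else 0.5, 0.5)
        for t in range(timepoints):
            y = intercept + slope * t + random.gauss(0, 1)
            records.append((pid, treated, t, y))
    return records

def ols_slope(xs, ys):
    # Least-squares slope of y on x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

data = simulate()

# Stage 1: estimate one growth slope per participant.
series, groups = {}, {}
for pid, treated, t, y in data:
    series.setdefault(pid, ([], []))
    series[pid][0].append(t)
    series[pid][1].append(y)
    groups[pid] = treated
per_person = {pid: ols_slope(xs, ys) for pid, (xs, ys) in series.items()}

# Stage 2: the difference in mean slopes between randomized groups
# estimates the program effect on the rate of change.
treated_mean = sum(s for p, s in per_person.items() if groups[p]) / 20
control_mean = sum(s for p, s in per_person.items() if not groups[p]) / 20
effect = treated_mean - control_mean
print(f"estimated program effect on growth rate: {effect:.2f}")
```

A full multilevel model would estimate both stages jointly and weight participants by the precision of their individual slopes; the two-stage version is shown only because it makes the logic of "individual change plus group comparison" explicit.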
The authors used a pretest-posttest control group design with random assignment to evaluate whether early reading failure decreases children's motivation to practice reading. First, they investigated whether 60 first-grade children would report substantially different levels of interest in reading as a function of their relative success or failure in learning to read. Second, they evaluated whether increasing the word reading ability of 15 at-risk children would lead to gains in their motivation to read. Multivariate analyses of variance suggest marked differences in both motivation and reading practice between skilled and unskilled readers. However, bolstering at-risk children's word reading ability did not yield evidence of a causal relationship between early reading failure and decreased motivation to engage in reading activities. Instead, hierarchical regression analyses indicate a covarying relationship among early reading failure, poor motivation, and avoidance of reading.
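Hierarchical regression, as used in the analyses above, enters predictors in sequential blocks and interprets the increment in R² at each step as the unique contribution of the newly added block. The sketch below illustrates that procedure on simulated data; the variable names, effect sizes, and data structure are hypothetical and not drawn from the study itself.

```python
import random

random.seed(1)

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for small systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def r_squared(X, y):
    # OLS fit via the normal equations; returns the model's R^2.
    n, k = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = solve(XtX, Xty)
    yhat = [sum(b * x for b, x in zip(beta, row)) for row in X]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Simulated data for 60 children: motivation covaries with word-reading
# skill, and reading practice adds incremental variance (hypothetical).
n = 60
skill = [random.gauss(0, 1) for _ in range(n)]
practice = [0.5 * s + random.gauss(0, 1) for s in skill]
motivation = [0.6 * s + 0.4 * p + random.gauss(0, 1)
              for s, p in zip(skill, practice)]

# Step 1: skill only.  Step 2: add practice.  The R^2 increment is the
# unique contribution of practice beyond skill.
X1 = [[1.0, s] for s in skill]
X2 = [[1.0, s, p] for s, p in zip(skill, practice)]
r2_step1 = r_squared(X1, motivation)
r2_step2 = r_squared(X2, motivation)
print(f"R2 step1={r2_step1:.3f}  step2={r2_step2:.3f}  "
      f"delta={r2_step2 - r2_step1:.3f}")
```

Because the step-1 model is nested in the step-2 model, R² can only stay flat or rise; an F test on the increment would determine whether the added block explains significant additional variance.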
Objective. To analyze scores on a scale designed to measure helplessness, a cognitive variable, as a possible mediator of the association between formal education level and mortality over 5 years in patients with rheumatoid arthritis (RA).