Background: An understanding of the individual and combined effects of a set of intervention components is important for moving the science of preventive medicine interventions forward. This understanding can often be achieved efficiently and economically via a factorial experiment, in which two or more independent variables are manipulated. The factorial experiment is a complement to the randomized controlled trial (RCT); the two designs address different research questions.
Purpose: This article offers an introduction to factorial experiments aimed at investigators trained primarily in the RCT.
Method: The factorial experiment is compared and contrasted with other experimental designs commonly used in intervention science to highlight where each is most efficient and appropriate.
Results: Several points are made: factorial experiments make very efficient use of experimental subjects when the data are properly analyzed; a factorial experiment can have excellent statistical power even with relatively few subjects per experimental condition; and when conducting research to select components for inclusion in a multicomponent intervention, interactions should be studied rather than avoided.
Conclusions: Investigators in preventive medicine and related areas should begin considering factorial experiments alongside other approaches. Experimental designs should be chosen from a resource management perspective, which holds that the best experimental design is the one that provides the greatest scientific benefit without exceeding available resources.
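The efficiency claim above can be made concrete with a small sketch. In a 2×2 factorial with effect coding (+1/−1), every subject's outcome contributes to the estimate of each main effect, which is why factorial designs need far fewer subjects per estimated effect than a series of two-arm comparisons. The cell means below are hypothetical numbers chosen purely for illustration, not taken from the article.

```python
# Illustrative sketch (hypothetical data): main effects and the interaction
# in a 2x2 factorial design under effect coding (+1 = component on, -1 = off).

# Hypothetical mean outcomes for the four cells:
# keys are (level of component A, level of component B).
cell_means = {
    (+1, +1): 10.0,  # A on,  B on
    (+1, -1): 8.0,   # A on,  B off
    (-1, +1): 7.0,   # A off, B on
    (-1, -1): 6.0,   # A off, B off
}

def main_effect(factor_index):
    """Average outcome at the factor's +1 level minus its -1 level.
    Note that all four cells (i.e., all subjects) enter the estimate."""
    hi = [m for lv, m in cell_means.items() if lv[factor_index] == +1]
    lo = [m for lv, m in cell_means.items() if lv[factor_index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

def interaction():
    """A x B interaction: half the difference of the simple effects of A
    at each level of B (effect-coding convention)."""
    return (cell_means[(+1, +1)] - cell_means[(-1, +1)]
            - (cell_means[(+1, -1)] - cell_means[(-1, -1)])) / 2

print(main_effect(0))  # main effect of A: (10+8)/2 - (7+6)/2 = 2.5
print(main_effect(1))  # main effect of B: (10+7)/2 - (8+6)/2 = 1.5
print(interaction())   # ((10-7) - (8-6)) / 2 = 0.5
```

Because each main effect averages over all cells, the same subjects are reused for every effect estimate, which is the source of the design's efficiency when the data are analyzed this way.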
The multiphase optimization strategy (MOST) is a framework for not only evaluating but also optimizing behavioral interventions. A tool critical to MOST is the screening experiment, which enables efficient gathering of information for deciding which components to include in an optimized intervention. This article outlines a procedure for making decisions based on data from a factorial screening experiment. The decision-making procedure is illustrated with artificial data generated to resemble empirical data. The illustration suggests that this approach is useful for selecting intervention components and settings based on the results of a factorial screening experiment. It is important to develop methods for making decisions based on factorial screening experiments. The approach demonstrated here is potentially useful but has limited generalizability. Future research should develop additional decision-making procedures for a variety of situations.
Keywords: Comparative effectiveness; multiphase optimization strategy; factorial experiments; behavioral interventions
For many years, there has been a heavy emphasis on evaluating multicomponent behavioral interventions by means of the randomized clinical trial (RCT) and very little emphasis on examining the individual components that make up interventions. In most cases it is unknown whether all of the major components of a successful intervention contribute to the overall observed effect, or whether expensive or logistically demanding components contribute enough to offset their resource requirements.
When new interventions are developed, there may be little disincentive to including many components in an effort to ensure a significant program effect, even though a significant program effect is no guarantee that every component is necessary. Today, there is growing interest in improving the effectiveness and efficiency of behavioral health interventions, as evidenced by the increase in research comparing the effectiveness of alternative approaches to the prevention and treatment of health problems. One approach to hastening progress in this area is to optimize the performance of interventions proactively, before they are evaluated and compared to existing alternatives. The multiphase optimization strategy (MOST) [1] provides one framework for accomplishing this. MOST is a comprehensive, engineering-based approach to behavioral intervention optimization and evaluation. MOST includes the RCT as the gold standard for establishing whether an intervention as a package has a statistically significant effect in comparison to a control group, standard of care, or competing intervention. However, in MOST, additional steps are taken to optimize the intervention systematically, typically in advance of an RCT. In MOST, the term "optimization" has a specific technical meaning: "the process of finding the best possible solution to a problem… subject to given constraints" [2]. Thus, the goal of MOST is not to build the best intervention in some absolute, and perhaps unattainable…
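The definition of optimization quoted above, "the best possible solution… subject to given constraints," can be sketched as a small selection problem. The component names, estimated effects, and costs below are invented for illustration, and the sketch assumes effects combine additively (i.e., interactions are ignored), a simplification the MOST literature itself cautions against.

```python
from itertools import combinations

# Hypothetical components with illustrative estimated main effects and
# per-participant delivery costs -- none of these numbers come from a study.
components = {
    "counseling":   {"effect": 2.5, "cost": 40},
    "medication":   {"effect": 1.8, "cost": 25},
    "text_support": {"effect": 0.9, "cost": 5},
    "peer_group":   {"effect": 0.4, "cost": 30},
}
budget = 70  # resource constraint per participant

def best_subset(components, budget):
    """Brute-force search for the component set with the largest summed
    effect whose total cost stays within the budget (additivity assumed)."""
    names = list(components)
    best, best_effect = (), 0.0
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(components[n]["cost"] for n in subset)
            effect = sum(components[n]["effect"] for n in subset)
            if cost <= budget and effect > best_effect:
                best, best_effect = subset, effect
    return set(best), best_effect

chosen, total_effect = best_subset(components, budget)
print(chosen)        # components selected within the budget
print(total_effect)  # summed estimated effect under the additivity assumption
```

Brute force is fine for the handful of components typical of a screening experiment (2^k subsets for k components); the point is only that "optimized" here means best under a stated constraint, not best in an absolute sense.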
Efficient new technology has made it straightforward for behavioral scientists to collect anywhere from several dozen to several thousand dense, repeated measurements on one or more time-varying variables. These intensive longitudinal data (ILD) are ideal for examining complex change over time, but they present new challenges that illustrate the need for more advanced analytic methods. For example, in ILD the temporal spacing of observations may be irregular, and individuals may be sampled at different times. In addition, to make inferences about a particular intervention's effectiveness within the population of interest, it is important to assess both how the outcome changes over time and how participants' time-varying processes vary between individuals. The methods presented in this article integrate two innovative ILD analytic techniques: functional data analysis and dynamical systems modeling. An empirical application is presented using data from a smoking cessation clinical trial. Study participants provided 42 daily assessments of pre-quit and post-quit withdrawal symptoms. Regression splines were used to approximate smooth functions of craving and negative affect and to estimate the variables' derivatives for each participant. We then modeled the dynamics of nicotine craving using standard input-output dynamical systems models. These models provide a more detailed characterization of the post-quit craving process than do traditional longitudinal models, including information regarding the type, magnitude, and speed of the response to an input. The results, in conjunction with standard engineering control theory techniques, could potentially be used by tobacco researchers to develop a more effective smoking intervention.
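The spline step described above, approximating a smooth function from noisy daily assessments and taking its derivative, can be sketched as follows. The 42-point trajectory is simulated (an exponential decay plus noise) purely as a stand-in for one participant's craving scores, and a cubic smoothing spline is used as a generic substitute for the article's regression-spline approach.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Simulated stand-in for one participant's 42 daily craving scores;
# the decay shape and noise level are invented for illustration.
days = np.arange(42, dtype=float)
rng = np.random.default_rng(seed=0)
craving = 5.0 * np.exp(-days / 10.0) + rng.normal(0.0, 0.1, size=days.size)

# A cubic smoothing spline approximates the smooth craving function;
# its analytic derivative estimates the rate of change at any time point,
# which is the quantity fed into a dynamical systems model.
spline = UnivariateSpline(days, craving, k=3, s=0.5)
d_spline = spline.derivative()

print(spline(0.0), spline(20.0))  # fitted craving early vs. late post-quit
print(d_spline(5.0))              # estimated rate of change on day 5
```

Working with the fitted function and its derivative, rather than the raw observations, is what lets irregularly spaced, noisy measurements be treated as a smooth process whose dynamics can then be modeled.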
Programs delivered in the “real world” often look substantially different from what was originally intended by program developers. Depending on which components of a program are being trimmed or altered, such modifications may seriously undermine the effectiveness of a program. In the present study, these issues are explored within a widely used school-based, non-curricular intervention, Positive Behavioral Interventions and Supports (PBIS). The present study takes advantage of a uniquely large dataset to gain a better understanding of the “real-world” implementation quality of PBIS, and to take a first step toward identifying the components of PBIS that “matter most” for student outcomes. Data from 27,689 students and 166 public primary and secondary schools across seven states included school and student demographics, indices of PBIS implementation quality, and reports of problem behaviors for any student who received an office discipline referral (ODR) during the 2007-2008 school year. Results of the present study identify three key components of PBIS that many schools are failing to implement properly, three program components that were most related to lower rates of problem behavior (i.e., three “active ingredients” of PBIS), and several school characteristics that help to account for differences across schools in the quality of PBIS implementation. Overall, findings highlight the importance of assessing implementation quality in “real-world” settings, and the need to continue improving understanding of how and why programs work. Findings are discussed in terms of their implications for policy.