Background Careful consideration and planning are required to establish “sufficient” evidence to ensure an investment in a larger, better-powered behavioral intervention trial is worthwhile. In the behavioral sciences, this typically takes the form of smaller-scale studies informing larger-scale trials. Believing that one can do the same things and expect the same outcomes in a larger-scale trial that were done in a smaller-scale preliminary study (i.e., pilot/feasibility) is wishful thinking, yet common practice. Starting small makes sense, but small studies come with big decisions that can influence the usefulness of the evidence designed to inform decisions about moving forward with a larger-scale trial. The purpose of this commentary is to discuss what may constitute sufficient evidence for moving forward to a definitive trial. The discussion focuses on challenges often encountered when conducting pilot/feasibility studies, referred to as common (mis)steps, that can lead to inflated estimates of both feasibility and efficacy, and on how the intentional design and execution of one or more, often small, pilot/feasibility studies can play a central role in developing an intervention that scales beyond a highly localized context. Main body Establishing sufficient evidence to support larger-scale, definitive trials from smaller studies is complicated. For any given behavioral intervention, the type and amount of evidence necessary to be deemed sufficient is inherently variable and can range anywhere from qualitative interviews of individuals representative of the target population to a small-scale randomized trial that mimics the anticipated larger-scale trial.
The major challenges and common (mis)steps in the execution of pilot/feasibility studies discussed here include selecting the right sample size; issues with scaling; adaptations and their influence on the preliminary feasibility and efficacy estimates observed; and the growing pains of progressing from small to large samples. Finally, funding and resource constraints for conducting informative pilot/feasibility studies are discussed. Conclusion Sufficient evidence to scale will always remain in the eye of the beholder. An understanding of how to design informative small pilot/feasibility studies can assist in speeding up incremental science (where everything needs to be piloted) while slowing down premature scale-up (where any evidence is sufficient for scaling).
Background Excessive screen time (≥ 2 h per day) is associated with childhood overweight and obesity, physical inactivity, increased sedentary time, unfavorable dietary behaviors, and disrupted sleep. Previous reviews suggest intervening on screen time is associated with reductions in screen time and improvements in other obesogenic behaviors. However, it is unclear what study characteristics and behavior change techniques are potential mechanisms underlying the effectiveness of behavioral interventions. The purpose of this meta-analysis was to identify the behavior change techniques and study characteristics associated with effectiveness in behavioral interventions to reduce children’s (0–18 years) screen time. Methods A literature search of four databases (Ebscohost, Web of Science, EMBASE, and PubMed) was executed between January and February 2020 and updated during July 2021. Behavioral interventions targeting reductions in children’s (0–18 years) screen time were included. Information on study characteristics (e.g., sample size, duration) and behavior change techniques (e.g., information, goal-setting) was extracted. Data on randomization, allocation concealment, and blinding were extracted and used to assess risk of bias. Meta-regressions were used to explore whether intervention effectiveness was associated with the presence of behavior change techniques and study characteristics. Results The search identified 15,529 articles, of which 10,714 were screened for relevancy and 680 were retained for full-text screening. Of these, 204 studies provided quantitative data for the meta-analysis. The overall summary of random effects showed a small, beneficial impact of screen time interventions compared to controls (SDM = 0.116, 95% CI 0.08 to 0.15). Inclusion of the Goals, Feedback, and Planning behavior change techniques was associated with a positive impact on intervention effectiveness (SDM = 0.145, 95% CI 0.11 to 0.18).
Interventions with smaller sample sizes (n < 95) delivered over shorter durations (< 52 weeks) were associated with larger effects compared to studies with larger sample sizes delivered over longer durations. In the presence of the Goals, Feedback, and Planning behavior change techniques, intervention effectiveness diminished as sample size increased. Conclusions Both intervention content and context are important to consider when designing interventions to reduce children’s screen time. As interventions are scaled, determining the active ingredients to optimize interventions along the translational continuum will be crucial to maximize reductions in children’s screen time.
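The pooled summary effects reported above (e.g., SDM = 0.116, 95% CI 0.08 to 0.15) are the kind of estimate produced by a random-effects meta-analysis. As a minimal sketch of how such a pooled standardized mean difference is computed, the snippet below implements the standard DerSimonian-Laird estimator; the specific estimator used by the authors is not stated in the abstract, and the study effect sizes and variances here are illustrative placeholders, not data from the meta-analysis.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling of
# standardized mean differences (SMDs). Inputs are illustrative.
import math

def pool_random_effects(effects, variances):
    """Return (pooled SMD, 95% CI lower, 95% CI upper)."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q measures between-study heterogeneity.
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Three hypothetical studies: per-study SMDs and their sampling variances.
smd, lo, hi = pool_random_effects([0.05, 0.20, 0.12], [0.01, 0.02, 0.015])
```

The pooled estimate always lies within the range of the individual study effects, and the confidence interval widens as between-study variance (tau-squared) grows.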
Objective: Children who fail to meet activity, sleep, and screen-time guidelines are at increased risk for obesity. Further, children who are Black are more likely to have obesity when compared to children who are White, and children from low-income households are at increased risk for obesity when compared to children from higher-income households. The objective of this study was to evaluate the proportion of days meeting obesogenic behavior guidelines during the school year compared to summer vacation by race and free/reduced-price lunch (FRPL) eligibility. Methods: Mixed-effects linear and logistic regressions estimated the proportion of days participants met activity, sleep, and screen-time guidelines during summer and school by race and FRPL eligibility within an observational cohort sample. Results: Children (n = 268, grades K–4, 44.1% FRPL, 59.0% Black) attending three schools participated. Children's activity, sleep, and screen time were collected during an average of 23 school days and 16 days during summer vacation. During school, children who were White and children eligible for FRPL met activity, sleep, and screen-time guidelines on a greater proportion of days when compared to their Black and non-eligible counterparts, respectively. Significant differences in changes from school to summer in the proportion of days children met activity (−6.2%, 95% CI = −10.1%, −2.3%; OR = 0.7, 95% CI = 0.6, 0.9) and sleep (7.6%, 95% CI = 2.9%, 12.4%; OR = 2.1, 95% CI = 1.4, 3.0) guidelines between children who were Black and White were observed.
Differences in changes in activity (−8.5%, 95% CI = −12.1%, −4.9%; OR = 1.5, 95% CI = 1.3, 1.8) were observed between children eligible versus ineligible for FRPL. Conclusions: Summer vacation may be an important time for targeting the activity and screen time of children who are Black and/or eligible for FRPL. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Biases introduced in early-stage studies can lead to inflated early discoveries. The risk of generalizability biases (RGBs) framework identifies key features of feasibility studies that, when present, lead to reduced impact in a larger trial. This meta-study examined the influence of RGBs in adult obesity interventions. Behavioral interventions with a published feasibility study and a larger-scale trial of the same intervention (i.e., pairs) were identified. Each pair was coded for the presence of RGBs. Quantitative outcomes were extracted. Multilevel meta-regression models were used to examine the impact of RGBs on the difference in effect size (ES, standardized mean difference) from pilot to larger-scale trial. A total of 114 pairs, representing 230 studies, were identified. Overall, 75% of the pairs had at least one RGB present. The four most prevalent RGBs were duration (33%), delivery agent (30%), implementation support (23%), and target audience (22%) bias. The largest reductions in ES were observed in pairs where an RGB was present in the pilot and removed in the larger-scale trial (average ES reduction −0.41, range −1.06 to 0.01), compared with pairs without an RGB (average ES reduction −0.15, range −0.18 to −0.14). Eliminating RGBs during early-stage testing may result in improved evidence.