2018
DOI: 10.1186/s13063-018-2941-8
Monitoring performance of sites within multicentre randomised trials: a systematic review of performance metrics

Abstract: Background: Large multicentre trials are complex and expensive projects. A key factor for their successful planning and delivery is how well sites meet their targets in recruiting and retaining participants, and in collecting high-quality, complete data in a timely manner. Collecting and monitoring easily accessible data relevant to performance of sites has the potential to improve trial management efficiency. The aim of this systematic review was to identify metrics that have either been proposed or used for mo…

Cited by 9 publications (5 citation statements) | References 28 publications
“…Improving the quality of MCTs will ensure that both time and money are spent effectively. MCT improvement therefore matters to funders, researchers, clinicians and policymakers, as well as to patients [8][9]. Unfortunately, previous researchers have identified many problems in MCTs, such as (1) a lack of criteria for center selection, so that centers with delayed start-up, unmet recruitment targets, and poor data quality participate, contributing to inefficient allocation of resources and time [10][11]; (2) inadequate analysis of heterogeneity in the MCT, especially in baseline characteristics and treatment efficacy across centers [12][13]; (3) absent reporting of the center effect; where sample sizes are imbalanced across centers, adjustment or analysis for an excessive center effect should be considered [14][15]; and (4) a lack of data monitoring (e.g., central monitoring techniques or on-site monitoring) to ensure data quality across centers [16][17].…”
Section: Introduction
confidence: 99%
“…Monitoring of clinical research centres usually concentrates on purely regulatory aspects or performance metrics [3,4]. While there are studies of the effectiveness of external inspections in improving standard healthcare [5], this is, to our knowledge, the first external analysis of a large and systematic campaign of regulatory inspections of clinical research centres performing early pharmacology studies, aimed at assessing the medical relevance of these inspections.…”
Section: Discussion
confidence: 99%
“…19 This vision of a comprehensive set of metrics contrasts strongly with the vision of a core set that could be used by all multicentre trials, proposed by Whitham et al. 20 They used a Delphi process to choose a set of eight key performance metrics from a large set of performance metrics identified in a systematic literature review of studies that proposed or used metrics for monitoring or measuring performance. 21 These suggested metrics have not been tested systematically for monitoring effectiveness. Whitham et al. concluded that future research should evaluate the effectiveness of using their core metrics, 20 whereas TransCelerate only called on industry partners to volunteer what had or had not worked, rating metric changes over time as “better”, “worse” or “about the same”.…”
Section: Introduction
confidence: 99%