2014
DOI: 10.1177/2168479014546335
Defining a Central Monitoring Capability: Sharing the Experience of TransCelerate BioPharma’s Approach, Part 1

Abstract: Central monitoring, on-site monitoring, and off-site monitoring provide an integrated approach to clinical trial quality management. TransCelerate distinguishes central monitoring from other types of central data review activities and puts it in the context of an overall monitoring strategy. Any organization seeking to implement central monitoring will need people with the right skills, technology options that support a holistic review of study-related information, and adaptable processes. There are different …

Cited by 15 publications (23 citation statements)
References 5 publications
“…An evaluation tool was proposed to monitor individual site performance within a multicentre randomised trial [59] (ns).

Intensity-adjusted score (IAS): IAS = IS0 + d_on × IS1 + d_off × IS2, where:
• IS0 = score assigned for enrolling a new participant during the 6-month evaluation period
• d_on = number of days the participant was on study medication during the evaluation period
• d_off = number of days the participant was off study medication
• IS1 = intensity score for the days on which the participant is receiving study medication
• IS2 = intensity score for the days on which the participant is off all study medication
The IAS is calculated for each participant and then summed across all participants, once during the evaluation period.
• Funding-adjusted score = IAS divided by the amount awarded for total direct costs during the given time period
• Summary quartiles = total number of new and continuing participants on study

Sweetman, 2011 [23]: retrospective analysis of publications of 80 clinical trials on protocol-violation reporting. Not applicable. Metric: occurrence of protocol violations, defined as the total number of protocol violations divided by the number of enrolled participants.

Thom, 2011 [12]a: report of a centre performance assessment tool used within a clinical trial network to assess individual site performance. Not applicable. Metrics:
• Protocol adherence: average rate of protocol violations per enrolled participant
• Data quality: average rate of edit checks per participant
• Data timeliness: percentage of forms entered late
• Time of starting after the first centre's start date
• Overall rank: sum of protocol adherence, data quality, data timeliness, and timeliness of study start-up
• Timeliness of study start-up
• Recruitment: average percentage of participants contributed over all studies conducted (B)
• Retention: average percentage of participants with complete follow-up data (B)
• Recruitment/retention: sum of recruitment + retention to give overall rank (B)
• Adherence/quality (A)
• Quality of laboratory samples collected (A)

Tudur Smith, 2014 [24]: paper describing monitoring methods using a ‘risk proportionate approach’ used by an individual clinical trials unit. Not applicable. Metrics:
• Consent form completion: consent forms returned within 7 days of completion by sites
• Recruitment process: frequency of eligible participants who do not provide consent
• Missing primary outcome data: cumulative percentage of participants with missing primary outcome data at each site
• SAEs: cumulative percentage of participants with at least one SAE, across the trial as a whole and at each site, per measure of time (e.g. 1 month); sum of all SAEs / sum of all follow-up for the trial; sum of all follow-up at a site × overall SAE rate for the trial
• Visit dates: time between the actual date of visit and the expected date of visit
• Case report form completion: timely submission (A)

Wilson, 2014 [25]: theoretical paper describing methods of monitoring the conduct of trials. Not applicable. Metric: quality metric encompassing the average number of major audit findings per audited site; percentage per site of unreported, confirmed SAEs; number of significant protocol deviations p...…”
Section: Results (mentioning, confidence: 99%)
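The intensity-adjusted score and the SAE-rate comparison above can be sketched in code. This is a minimal illustration, not part of any cited tool: the function and variable names mirror the review's definitions (IS0, IS1, IS2, d_on, d_off), and all numeric inputs are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    d_on: int   # days on study medication during the evaluation period
    d_off: int  # days off all study medication during the evaluation period

def intensity_adjusted_score(p: Participant, is0: float, is1: float, is2: float) -> float:
    """IAS = IS0 + d_on * IS1 + d_off * IS2 for one participant."""
    return is0 + p.d_on * is1 + p.d_off * is2

def site_scores(participants, is0, is1, is2, total_direct_costs):
    """Sum the IAS over all participants once per evaluation period,
    then divide by awarded direct costs to get the funding-adjusted score."""
    total_ias = sum(intensity_adjusted_score(p, is0, is1, is2) for p in participants)
    return total_ias, total_ias / total_direct_costs

def expected_site_saes(site_followup: float, trial_saes: float, trial_followup: float) -> float:
    """Tudur Smith-style comparison: expected SAE count at a site =
    site follow-up x overall trial SAE rate (total SAEs / total follow-up)."""
    return site_followup * (trial_saes / trial_followup)

# Hypothetical site with two participants over a 6-month evaluation period
participants = [Participant(d_on=120, d_off=60), Participant(d_on=90, d_off=0)]
ias, funding_adjusted = site_scores(
    participants, is0=10, is1=0.5, is2=0.2, total_direct_costs=50_000
)
```

An observed site SAE count well above `expected_site_saes(...)` for that site would flag it for closer (possibly on-site) review, which is the general pattern of central statistical monitoring.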
“…[23] Central monitoring also displaces resource within the trials unit from dedicated on-site monitors towards the database programmers and statisticians responsible for developing reports and reviewing data, and the trial management staff responsible for following up highlighted issues.[24] Resolving these resourcing questions, perhaps partly through different financial arrangements with centres in trials that rely more on central monitoring, could be a necessary precursor to a more widespread adoption of central monitoring methods.…”
Section: Discussion (mentioning, confidence: 99%)
“…Central monitoring also displaces resource within the trials unit from dedicated on-site monitors towards the database programmers and statisticians responsible for developing reports and reviewing data, and the trial management staff responsible for following up highlighted issues.[23] Resolving these resourcing questions, perhaps partly through different financial arrangements with centres in trials that rely more on central monitoring, could be a necessary precursor to a more widespread adoption of central monitoring methods.…”
Section: Discussion (mentioning, confidence: 99%)