2011
DOI: 10.6004/jnccn.2011.0103

Creating an Effort Tracking Tool to Improve Therapeutic Cancer Clinical Trials Workload Management and Budgeting

Abstract: Quantifying data management and regulatory workload for clinical research is a difficult task that would benefit from a robust tool to assess and allocate effort. As in most clinical research environments, the University of Michigan Comprehensive Cancer Center (UMCCC) Clinical Trials Office (CTO) struggled to effectively allocate data management and regulatory time, with frequently inaccurate estimates of how much time was required to complete the specific tasks performed by each role. In a dynamic clinical res…


Cited by 10 publications (9 citation statements)
References 4 publications
“…Such issues highlight the importance of having a sound strategic plan when opening clinical trials so that trials can accrue quickly and efficiently. Effort tracking systems may be able to help improve the resource allocation but can be cumbersome to implement in practice [11].…”
Section: Discussion (mentioning)
confidence: 99%
“…Three studies were published as a series of articles published within the same year on the same topic and are counted once for the purposes of the review, with all articles included in the reference list (Cusack, Jones-Wells, & Chisholm, 2004a, 2004b; Gwede, Johnson, & Trotti, 2000a, 2000b; James et al, 2011a, 2011b; Jones, Cusack, & Chisholm, 2004). For level of evidence, six used instrument development methods (Berridge, Coffey, Lyddiard, & Briggs, 2010; Briggs, 2008; Good, Lubejko, Humphries, & Medders, 2013; Hancock, Wiland, Brown, Kerner-Slemons, & Brown, 1995; James et al, 2011a, 2011b; Moore & Hastings, 2006). Two were prospective studies (Coffey, Berridge, Lyddiard, & Briggs, 2011; Penberthy, Dahman, Petkov, & DeShazo, 2012).…”
Section: Results (mentioning)
confidence: 99%
“…Finally, one was a quality improvement study (Cusack et al., 2004a, 2004b; Jones et al., 2004) and two were expert opinion or commentary (Cassidy & Macfarlane, 1991; Smuck et al., 2011). In relation to psychometric assessments, eight had no reliability and validity data reported (Berridge et al., 2010; Ellis et al., 2012; Good et al., 2013; Gwede et al., 2000a, 2000b; James et al., 2011a, 2011b; McCarthy, 1997; Roche et al., 2002; Smuck et al., 2011). Five studies mentioned reliability and validity but no data were presented (Briggs, 2008; Coffey et al., 2011; Cusack et al., 2004a, 2004b; Jones et al., 2004; Oddone et al., 1995; Penberthy et al., 2012).…”
Section: Acuity Tools Pertaining to Research Intensity (mentioning)
confidence: 99%
“…In addition, many of the other tools employ complex scoring formulas or ratings that include measurements of time associated with individual trial-related tasks and/or rankings similar to the WPAT but with more detailed ranking options [8-12]. The WPAT was developed with an emphasis on simplicity, reproducibility, and long-term usability.…”
Section: Discussion (mentioning)
confidence: 99%