2020
DOI: 10.3389/fonc.2020.00978

Registering Study Analysis Plans (SAPs) Before Dissecting Your Data—Updating and Standardizing Outcome Modeling

Abstract: Public preregistration of study analysis plans (SAPs) is widely recognized for clinical trials, but adopted to a much lesser extent in observational studies. Registration of SAPs prior to analysis is encouraged not only to increase transparency and exactness but also to avoid positive finding bias and better standardize outcome modeling. Efforts to generally standardize outcome modeling, which can be based on clinical trial and/or observational data, have recently been spurred. We suggest a three-step SAP concept i…

Cited by 4 publications (3 citation statements); references 23 publications.
“…The statistical analysis plan is an important part of registration and a required component of the CONSORT and PRISMA checklists. 148,159–161 Interestingly, while this was missing in 85.3% (n = 29/34) of the RCT registrations, it was present in the majority of MAs (75%, n = 27/36), and the MAs had this component completed in the registration and matched what was reported in the study.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Similar to a clinical trial, we preregistered our methodology in a Study Analysis Plan 21 before obtaining access to the validation data set to prevent selective reporting and positive bias. 22,23 The predictive model was developed on the basis of an internal cohort treated at Massachusetts General Hospital (MGH), locked and only then validated using an independent cohort treated at MD Anderson Cancer Center (MDACC), representing a TRIPOD type 3 study. 24 We developed a Cox proportional hazards model and a random survival forest to stratify patients into risk groups for mortality, and classification models to predict four binary end points: 1-year survival (SRVy1); 1-year nonlocal failure (NLFy1); nonclassic RILD, defined as 2+ increase in Child-Pugh (CP) score after 3 months of treatment (CP2+); and radiation-induced grade 3+ lymphopenia (RIL).…”
Section: Methods (mentioning)
Confidence: 99%
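The Methods excerpt above pairs a Cox proportional hazards model with a random survival forest for risk stratification. As a minimal sketch of that pairing, and not the quoted study's actual pipeline, the following Python snippet fits both models on synthetic data using the scikit-survival package (an assumed library choice) and stratifies held-out cases by median predicted risk; the real feature set, endpoints, and MGH/MDACC cohorts are not reproduced.

# Hypothetical sketch: Cox PH + random survival forest risk stratification
# on synthetic data (NOT the cohorts, features, or endpoints of the quoted study).
import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.ensemble import RandomSurvivalForest
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 5))                       # 5 synthetic covariates
time = rng.exponential(scale=np.exp(-X[:, 0]))    # survival times driven by covariate 0
event = rng.random(n) < 0.7                       # ~70% observed events, rest censored
y = Surv.from_arrays(event=event, time=time)      # structured array with fields (event, time)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit both survival models on the training split only
cox = CoxPHSurvivalAnalysis().fit(X_tr, y_tr)
rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X_tr, y_tr)

for name, model in [("Cox PH", cox), ("RSF", rsf)]:
    risk = model.predict(X_te)                    # higher score = higher predicted risk
    cindex = concordance_index_censored(y_te["event"], y_te["time"], risk)[0]
    groups = np.where(risk > np.median(risk), "high-risk", "low-risk")
    print(f"{name}: c-index={cindex:.3f}, high-risk n={int((groups == 'high-risk').sum())}")

Locking such a model before touching an external validation cohort, as the excerpt describes, would amount to freezing the fitted coefficients and any preprocessing after this training step and applying them unchanged to the independent data set.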
“…Until then, all items in the original TRIPOD statement are applicable to deep learning models, even when the terminology may at times differ. Other ‘best practices’ from classic outcome prediction modelling should also be strived for, including use of prospectively registered study protocols and data analysis plans [56] and publication of full models and code for independent validation, both of which were also lacking in the reviewed papers. It is worth noting that many of these issues were already highlighted in the QUANTEC papers over a decade ago [52,57].…”
Section: Current Challenges and Opportunities (mentioning)
Confidence: 99%