Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.
© 2015 American Association for Clinical Chemistry

As researchers, we talk and write about our studies, not just because we are happy, or disappointed, with the findings, but also to allow others to appreciate the validity of our methods, to enable our colleagues to replicate what we did, and to disclose our findings to clinicians, other health care professionals, and decision-makers, all of whom rely on the results of strong research to guide their actions.

Unfortunately, deficiencies in the reporting of research have been highlighted in several areas of clinical medicine (1). Essential elements of study methods are often poorly described and sometimes completely omitted, making both critical appraisal and replication difficult, if not impossible. Sometimes study results are selectively reported; at other times, researchers cannot resist unwarranted optimism in the interpretation of their findings (2-4). These practices limit the value of the research and of any downstream products or activities, such as systematic reviews and clinical practice guidelines.

Reports of studies of medical tests are no exception. A growing number of evaluations have identified deficiencies in the reporting of test accuracy studies (5). These are studies in which a test is evaluated against a clinical reference standard, or gold standard; the results are typically reported as estimates of the test's sensitivity and specificity, which express how well the test correctly identifies patients as having the target condition. Other accuracy statistics can be used as well, such as the area under the ROC curve or positive and negative predictive values.

Despite their apparent simplicity, such studies are at risk of bias (6, 7). If not all patients undergoing testing are included in the final analysis, for example, or if only healthy controls are included, the estimates of test accuracy may not reflect the performance of the test in clinical