2005
DOI: 10.1186/1742-7622-2-8

Assessing observational studies of medical treatments

Abstract: Background: Previous studies have assessed the validity of the observational study design by comparing results of studies using this design to results from randomized controlled trials. The present study examined design features of observational studies that could have influenced these comparisons.

Cited by 28 publications (10 citation statements)
References 64 publications
“…[30][31][32][33][34][35][36][37][38][39][40][41][42][43] A summary of the methods and findings of each of the 14 identified studies is presented in Appendix 2 (see Table 42). …”
Section: Quantification of Bias in Observational Studies
confidence: 99%
See 1 more Smart Citation
“…[30][31][32][33][34][35][36][37][38][39][40][41][42][43] A summary of the methods and findings of each the 14 identified studies is presented in Appendix 2 (see Table 42). …”
Section: Quantification Of Bias In Observational Studiesmentioning
confidence: 99%
“…In six of the studies, [33][34][35][37][38][41] data were sourced from published meta-analyses that included both RCTs and NRSs. Five other studies [30][32][39][40][42] took a different approach and searched for NRSs that compared treatment effects and then carried out a further search to locate relevant RCTs.…”
Section: Quantification of Bias in Observational Studies
confidence: 99%
“…The reliance on RCTs as the highest level of evidence is therefore being challenged [27]; even regulatory agencies are now beginning to review evidence from well-designed observational research when making labeling evaluations. On the contrary, the quality of design and reporting of many observational studies has been questioned [28]. The same attention to design, the control of confounding factors, and complete reporting are clearly as necessary for observational studies as for RCTs.…”
Section: Limitations of Current Guidelines
confidence: 99%
“…However, the concept that the assignment of subjects randomly to either experimental or control groups is a perfect science also has been questioned. In contrast to Hartz et al's assessment in 2005 [75], Benson and Hartz [76] in a 2000 publication comparing observational studies and randomized controlled trials found little evidence that estimates of treatment effects in observational studies reported after 1984 were either consistently larger than or qualitatively different from those obtained in randomized controlled trials. Furthermore, Hartz et al [77], in a 2003 publication assessing observational studies of chemonucleolysis, concluded that the results suggested that a review of several comparable observational studies may help evaluate treatment, identify patient types most likely to benefit from a given treatment, and provide information about study features that can improve the design of subsequent observational studies or even randomized controlled trials.…”
Section: Discussion
confidence: 78%
“…The poor quality of reporting in observational intervention studies was reported as a potential factor for confounding bias in 98% of studies [74]. In a 2005 publication, Hartz et al [75] assessed observational studies of medical treatments and concluded that reporting was often inadequate for use in comparing the study designs or allowing for any other meaningful interpretation of the results. However, the concept that the assignment of subjects randomly to either experimental or control groups is a perfect science also has been questioned.…”
Section: Discussion
confidence: 99%