The limits of agreement approach of Bland and Altman is by far the most popular method for investigating statistical agreement between two measurement devices. This work presents the dangers of relying on the limits of agreement alone and argues that authors should always report confidence intervals to convey the variability in the estimated limits.
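As an illustration of the point above, here is a minimal sketch in NumPy of the limits of agreement together with their confidence intervals, on synthetic data. The 1.96 normal quantile and the approximate standard error sqrt(3·s²/n) for each limit follow the usual large-sample formulas from the Bland–Altman literature; the two "devices" and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic paired measurements from two hypothetical devices.
a = rng.normal(100.0, 10.0, size=50)
b = a + rng.normal(1.0, 3.0, size=50)  # device B reads ~1 unit higher

d = b - a                       # paired differences
n = d.size
bias = d.mean()                 # mean difference
s = d.std(ddof=1)               # SD of the differences

# 95% limits of agreement.
loa_lower = bias - 1.96 * s
loa_upper = bias + 1.96 * s

# Approximate standard error of each limit: sqrt(3 s^2 / n).
se_loa = np.sqrt(3.0 * s**2 / n)

# 95% confidence interval around each limit -- the variability the
# abstract argues should always accompany the limits themselves.
ci_lower_limit = (loa_lower - 1.96 * se_loa, loa_lower + 1.96 * se_loa)
ci_upper_limit = (loa_upper - 1.96 * se_loa, loa_upper + 1.96 * se_loa)

print(f"bias={bias:.2f}, LoA=({loa_lower:.2f}, {loa_upper:.2f})")
print(f"CI around lower limit: ({ci_lower_limit[0]:.2f}, {ci_lower_limit[1]:.2f})")
```

With small n, these intervals can be wide relative to the limits themselves, which is exactly why reporting the limits alone can overstate the precision of the agreement assessment.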
Response misclassification of count data biases parameter estimators in Poisson regression models and understates their uncertainty. To correct these problems, researchers have devised classical procedures that rely on asymptotic distribution results and supplemental validation data to estimate the unknown misclassification parameters. We derive a new Bayesian Poisson regression procedure that accounts for, and corrects, misclassification in a count variable with two categories. Under the Bayesian paradigm, one can use validation data, expert opinion, or a combination of the two to correct for the consequences of misclassification. The proposed procedure yields a practical, effective way to account for misclassification in Poisson count regression models. We demonstrate its performance in a simulation study. Additionally, we analyze two real-data examples and compare our new Bayesian inference method, which adjusts for misclassification, with a similar analysis that ignores it.
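The following is not the authors' Bayesian procedure, but a simplified deterministic illustration of the bias the abstract describes: when a two-category count is misclassified with known probabilities, the expected observed counts are a linear mixing of the true counts, so inverting the (assumed-known) misclassification matrix recovers them. In the Bayesian setting these probabilities would instead carry priors informed by validation data or expert opinion; all numbers here are hypothetical.

```python
import numpy as np

# Hypothetical misclassification probabilities (in the Bayesian procedure
# these would be uncertain, with priors from validation data or experts).
p_correct_1 = 0.90  # P(observed category 1 | true category 1)
p_correct_2 = 0.85  # P(observed category 2 | true category 2)

# Column j of M gives P(observed = i | true = j).
M = np.array([[p_correct_1, 1.0 - p_correct_2],
              [1.0 - p_correct_1, p_correct_2]])

true_counts = np.array([300.0, 700.0])

# What misclassification does on average: the naive analysis would
# treat these mixed counts as if they were the truth.
expected_observed = M @ true_counts

# Correcting: invert the mixing to recover the true counts.
corrected = np.linalg.solve(M, expected_observed)

print("observed:", expected_observed)   # biased toward the other category
print("corrected:", corrected)          # recovers [300, 700]
```

This matrix-inversion step is the deterministic core that a full Bayesian model embeds in a likelihood, so that uncertainty about the misclassification probabilities propagates into the posterior for the regression parameters.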
When assessing comparative effectiveness or safety in observational research, the impact of unmeasured confounding should not be ignored. Instead, we suggest quantitatively evaluating the impact of unmeasured confounding, and we provide a best-practice recommendation for selecting appropriate analytical methods.
Purpose: We review statistical methods for assessing the possible impact of bias due to unmeasured confounding in real-world data analysis and provide detailed recommendations for choosing among them.

Methods: Updating an earlier systematic review, we summarize modern statistical best practices for evaluating, and correcting for, potential bias due to unmeasured confounding when estimating causal treatment effects from non-interventional studies.

Results: We suggest a hierarchical structure for assessing unmeasured confounding. First, for initial sensitivity analyses, we strongly recommend a recently developed method, the E-value, which is straightforward to apply and requires no prior knowledge of, or assumptions about, the unmeasured confounder(s). When such knowledge is available, the E-value can be supplemented at this step by the rule-out or array method. If these initial analyses suggest the results may not be robust to unmeasured confounding, subsequent analyses can be conducted using more specialized statistical methods, which we categorize by whether they require external data on the suspected unmeasured confounder(s), internal data, or no data. We also discuss other factors for choosing among these subsequent sensitivity analysis methods, including the type of unmeasured confounder and whether the analysis is intended to provide a corrected causal treatment effect.

Conclusion: Various analytical methods have been proposed to address unmeasured confounding, but little research has discussed a structured approach for selecting among them in practice. By providing practical suggestions for choosing appropriate initial and, where needed, more specialized subsequent sensitivity analyses, we hope to facilitate the widespread reporting of such analyses in non-interventional studies.
The suggested approach can also inform the pre-specification of sensitivity analyses before an analysis is executed, thereby increasing transparency and limiting selective study reporting.
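The E-value recommended above has a simple closed form: for a risk ratio RR ≥ 1, E = RR + sqrt(RR·(RR − 1)), with protective estimates (RR < 1) handled by taking the reciprocal first. A minimal sketch follows; the function name `e_value` is my own.

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio: the minimum strength of
    association (on the risk-ratio scale) an unmeasured confounder would
    need with both treatment and outcome to explain away the estimate."""
    if rr < 1:
        rr = 1.0 / rr  # reciprocal for protective effects
    return rr + math.sqrt(rr * (rr - 1.0))

print(e_value(2.0))   # 2 + sqrt(2) ~= 3.41
print(e_value(0.5))   # same value, by symmetry
```

An E-value of about 3.41 for RR = 2.0 means an unmeasured confounder would need risk-ratio associations of at least 3.41 with both treatment and outcome to fully account for the observed effect; this requires no assumptions about the confounder itself, which is why it suits the initial step of the hierarchy described above.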
We consider the problem of variable selection for logistic regression when the dependent variable is measured imperfectly, under both differential and non-differential misclassification. An MCMC sampling scheme is designed that incorporates uncertainty about which explanatory variables affect the dependent variable and which affect the probability of misclassification. Under the differential misclassification framework, we assume that a small, perfectly measured gold-standard sample is available to augment the imperfectly measured sample. A simulation study shows favourable results in terms of both variable selection and parameter estimation. Examples analysing the risk of violence against young women by their partners and the risk of injury in highway motor accidents are considered.
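The full MCMC scheme is beyond a short sketch, but the core ingredient of any misclassification-aware logistic model is the observed-data likelihood: under non-differential misclassification with known sensitivity and specificity, the probability of observing y* = 1 is sens·p(x) + (1 − spec)·(1 − p(x)), where p(x) is the true-outcome logistic probability. A minimal sketch of that likelihood, under assumed-known sensitivity/specificity; all function names and data are my own:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def observed_prob(X, beta, sens, spec):
    """P(observed y* = 1 | x) under non-differential misclassification."""
    p = sigmoid(X @ beta)                       # true-outcome probability
    return sens * p + (1.0 - spec) * (1.0 - p)  # mixed by sens/spec

def neg_log_lik(beta, X, y_star, sens, spec):
    """Observed-data negative log-likelihood, the quantity an MCMC or
    maximum-likelihood routine would evaluate at each candidate beta."""
    q = observed_prob(X, beta, sens, spec)
    return -np.sum(y_star * np.log(q) + (1 - y_star) * np.log(1 - q))

# Tiny hypothetical design matrix (intercept + one covariate).
X = np.array([[1.0, 0.2], [1.0, -1.0], [1.0, 0.8]])
beta = np.array([0.1, 1.5])
y_star = np.array([1, 0, 1])
print(neg_log_lik(beta, X, y_star, sens=0.9, spec=0.95))
```

In the differential setting the abstract describes, sens and spec would themselves depend on covariates and be estimated jointly, with the gold-standard sample anchoring the misclassification parameters; variable-selection indicators over both sub-models would then be sampled within the MCMC.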