Rationale, aims and objectives: The diversity of evidence types (e.g., case reports, animal studies and observational studies) makes the assessment of a drug's safety profile a formidable challenge. While frequentist uncertain inference struggles to aggregate these signals, the more flexible Bayesian approaches seem better suited to the task. Artificial Intelligence (AI) offers great promise to these approaches for information retrieval, decision support, and learning probabilities from data.
Methods: E-Synthesis is a Bayesian framework for drug safety assessment built on philosophical principles and considerations. It aims to aggregate all the available information in order to provide a Bayesian probability that a drug causes an adverse reaction. AI systems, which are increasingly automated, are being developed for evidence aggregation in medicine.
Results: We find that AI can help E-Synthesis with information retrieval, usability (graphical decision-making aids), learning Bayes factors from historical data, assessing the quality of information, and determining conditional probabilities for the so-called 'indicators' of causation in E-Synthesis. Conversely, E-Synthesis offers a solid methodological basis for (semi-)automated evidence aggregation with AI systems.
Conclusions: Properly applied, AI can help turn philosophical principles and considerations concerning evidence aggregation for drug safety into a tool that can be used in practice.
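The kind of Bayesian aggregation the abstract describes can be illustrated with the odds form of Bayes' theorem: a prior probability of causation is converted to odds, multiplied by a Bayes factor for each piece of evidence, and converted back to a probability. The following is a minimal sketch under the simplifying assumption that the evidence sources are independent; the function name and setup are illustrative, not E-Synthesis's actual machinery.

```python
def posterior_probability(prior, bayes_factors):
    """Update a prior probability that a drug causes an adverse reaction,
    given a Bayes factor for each (assumed independent) piece of evidence.

    Uses the odds form of Bayes' theorem:
    posterior odds = prior odds * product of Bayes factors.
    """
    odds = prior / (1.0 - prior)
    for bf in bayes_factors:
        odds *= bf  # each Bayes factor > 1 favors causation, < 1 disfavors it
    return odds / (1.0 + odds)

# Example: a sceptical prior of 0.2 combined with two pieces of evidence,
# one moderately supportive (BF = 4) and one weakly contrary (BF = 0.8).
p = posterior_probability(0.2, [4.0, 0.8])
```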
Contemporary debates about scientific institutions and practice feature many proposed reforms. Most of these require increased efforts from scientists. But how do scientists’ incentives for effort interact? How can scientific institutions encourage scientists to invest effort in research? We explore these questions using a game-theoretic model of publication markets. We employ a base game between authors and reviewers, before assessing some of its tendencies by means of analysis and simulations. We compare how the effort expenditures of these groups interact in our model under a variety of settings, such as double-blind and open review systems. We make a number of findings, including that open review can increase the effort of authors in a range of circumstances and that these effects can manifest in a policy-relevant period of time. However, we find that open review’s impact on authors’ efforts is sensitive to the strength of several other influences.
The debates between Bayesian, frequentist, and other methodologies of statistics have tended to focus on conceptual justifications, sociological arguments, or mathematical proofs of their long run properties. Both Bayesian statistics and frequentist (“classical”) statistics have strong cases on these grounds. In this article, we instead approach the debates in the “Statistics Wars” from a largely unexplored angle: simulations of different methodologies’ performance in the short to medium run. We used Big Data methods to conduct a large number of simulations of a straightforward decision problem based on tossing a coin with unknown bias and then placing bets. In these simulations, we programmed four players, inspired by Bayesian statistics, frequentist statistics, Jon Williamson’s version of Objective Bayesianism, and a player who simply extrapolates from observed frequencies to general frequencies. The last player served as a benchmark: any worthwhile statistical methodology should at least match the performance of simplistic induction. We focused on how well these methodologies guided the players towards good decisions. Unlike an earlier simulation study of this type, we found no systematic difference in performance between the Bayesian and frequentist players, provided the Bayesian used a flat prior and the frequentist used a low confidence level. The Williamsonian player was also able to perform well given a low confidence level. However, the frequentist and Williamsonian players performed poorly with high confidence levels, while the Bayesian was surprisingly harmed by biased priors. Our study indicates that all three methodologies should be taken seriously by philosophers and practitioners of statistics.
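The coin-tossing decision problem described above can be sketched in a few lines. The sketch below is illustrative only and is not the paper's actual simulation code: it compares a flat-prior Bayesian player against the simplistic-induction benchmark on a hypothetical bet (pay 1 to win 2 if the next toss lands heads), with each player betting exactly when its own estimate makes the bet's expected value positive.

```python
import random

def simulate(true_p, n_tosses, trials, seed=0):
    """Average net payoff per trial for two players facing the same bet:
    stake 1 to win 2 if the next toss is heads.

    - Bayesian player: flat Beta(1, 1) prior, so its estimate after
      observing the tosses is the posterior mean (heads + 1) / (n + 2).
    - Frequency player: simplistic induction, i.e. the raw observed
      relative frequency of heads.
    """
    rng = random.Random(seed)
    bayes_total = freq_total = 0.0
    for _ in range(trials):
        heads = sum(rng.random() < true_p for _ in range(n_tosses))
        p_bayes = (heads + 1) / (n_tosses + 2)   # posterior mean, flat prior
        p_freq = heads / n_tosses                 # observed frequency
        next_is_heads = rng.random() < true_p
        payoff = (2.0 if next_is_heads else 0.0) - 1.0  # net result of betting
        # Bet iff the player's estimate gives the bet positive expectation:
        # 2 * p_estimate - 1 > 0, i.e. p_estimate > 0.5.
        if p_bayes > 0.5:
            bayes_total += payoff
        if p_freq > 0.5:
            freq_total += payoff
    return bayes_total / trials, freq_total / trials
```

With a favourable coin (true_p = 0.7) both players should converge on average payoffs near the bet's true expectation of 0.4; with an unfavourable coin they should mostly abstain.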