Regulatory impact analyses (RIAs) weigh the benefits of regulations against the burdens they impose and are invaluable tools for informing decision makers. We offer 10 tips for nonspecialist policymakers and interested stakeholders who will be reading RIAs as consumers:

1. Core problem: Determine whether the RIA identifies the core problem (compelling public need) the regulation is intended to address.
2. Alternatives: Look for an objective, policy-neutral evaluation of the relative merits of reasonable alternatives.
3. Baseline: Check whether the RIA presents a reasonable "counterfactual" against which benefits and costs are measured.
4. Increments: Evaluate whether totals and averages obscure relevant distinctions and trade-offs.
5. Uncertainty: Recognize that all estimates involve uncertainty, and ask what effect key assumptions, data, and models have on those estimates.
6. Transparency: Look for transparency and objectivity of analytical inputs.
7. Benefits: Examine how projected benefits relate to stated objectives.
8. Costs: Understand what costs are included.
9. Distribution: Consider how benefits and costs are distributed.
10. Symmetrical treatment: Ensure that benefits and costs are presented symmetrically.
The emergence of behavioral public administration has led to increasing calls for public managers and policy makers to consider predictable cognitive biases when regulating individual behaviors or market transactions. Recognizing that cognitive biases can also affect the regulators themselves, this article attempts to understand how the institutional environment in which regulators operate interacts with their cognitive biases. In other words, to what extent does the "choice architecture" that regulators face reinforce or counteract predictable cognitive biases? Just as knowledge of behavioral insights can help regulators design a choice architecture that frames individual decisions to encourage welfare-enhancing choices, it may help governments understand and design institutions to counter cognitive biases in regulators that contribute to deviations from public interest policies. From these observations, the article offers some modest suggestions for improving the regulatory choice architecture.
After a decades-long decline in the availability of abortion training, training opportunities have increased. However, there is reason to interpret these results cautiously, given possible response bias and pressure to report the availability of abortion training arising from new guidelines from the Accreditation Council for Graduate Medical Education.
Behavioral research has shown that individuals do not always behave in ways that match textbook definitions of rationality but are subject to cognitive biases that may lead to systematic errors in judgments and decisions. Recognizing that regulators are not immune from these cognitive irrationalities, this article explores how the institutional framework or "choice architecture" in which they operate interacts with those biases. By examining five cognitive biases that may be prevalent among regulators, it discusses the extent to which the institutions regulators face reinforce or counteract the influence of cognitive biases in regulatory decision making. Just as behavioral insights can help design a choice architecture to frame individual decisions in ways that encourage welfare-enhancing choices, consciously designing regulators' institutional frameworks with behavioral insights in mind could lead to more public-welfare-enhancing policies. The article concludes with some modest ideas for improving regulators' choice architecture and suggestions for further research.
Federal and other regulatory agencies often use or claim to use a weight of evidence (WoE) approach in chemical evaluation. Their approaches to the use of WoE, however, differ significantly, rely heavily on subjective professional judgment, and merit improvement. We review uses of WoE approaches in key articles in the peer-reviewed scientific literature, and find significant variations. We find that a hypothesis-based WoE approach, developed by Lorenz Rhomberg et al., can provide a stronger scientific basis for chemical assessment while improving transparency and preserving the appropriate scope of professional judgment. Their approach, while still evolving, relies on the explicit specification of the hypothesized basis for using the information at hand to infer the ability of an agent to cause human health impacts or, more broadly, affect other endpoints of concern. We describe and endorse such a hypothesis-based WoE approach to chemical evaluation.
This paper explores the motivations and institutional incentives of participants involved in the development of regulation aimed at reducing health risks, with a goal of understanding and identifying solutions to what the Bipartisan Policy Center has characterized as "a tendency to frame regulatory issues as debates solely about science, regardless of the actual subject in dispute, [that] is at the root of the stalemate and acrimony all too present in the regulatory system today." We focus our analysis with a case study of the procedures for developing National Ambient Air Quality Standards under the Clean Air Act, and attempt to identify procedural approaches that bring greater diversity (in data, expertise, experience, and accountability) into the decision process.

[1] This working paper, which has been submitted to the Supreme Court Economic Review, reflects the views of the authors and does not represent an official position of the GW Regulatory Studies Center or the George Washington University. The Center's policy on research integrity is available at http://regulatorystudies.columbian.gwu.edu/policy-research-integrity.
ON 7 MAY 2004, Science published the Report "RNA-mediated metal-metal bond formation in the synthesis of hexagonal palladium nanoparticles" by Lina A. Gugliotti, Daniel L. Feldheim, and Bruce E. Eaton (1). After an investigation by the U.S. National Science Foundation's (NSF's) Office of Inspector General, NSF did not find that the authors' actions constituted misconduct. NSF nonetheless concluded that they "were a significant departure from research practices" and "a misrepresentation of data on which a conclusion was based" (2). In response to the NSF ruling, author Feldheim sent wording for a correction to Science. However, the Editors do not think a correction is appropriate given the concerns raised by the Inspector General's report about what evidence was available to support the authors' assertions at the time the paper was published. Hence, Science is issuing this Retraction instead.