Regulatory impact analyses (RIAs) weigh the benefits of regulations against the burdens they impose and are invaluable tools for informing decision makers. We offer 10 tips for nonspecialist policymakers and interested stakeholders who will be reading RIAs as consumers.

1. Core problem: Determine whether the RIA identifies the core problem (compelling public need) the regulation is intended to address.
2. Alternatives: Look for an objective, policy-neutral evaluation of the relative merits of reasonable alternatives.
3. Baseline: Check whether the RIA presents a reasonable “counterfactual” against which benefits and costs are measured.
4. Increments: Evaluate whether totals and averages obscure relevant distinctions and trade-offs.
5. Uncertainty: Recognize that all estimates involve uncertainty, and ask what effect key assumptions, data, and models have on those estimates.
6. Transparency: Look for transparency and objectivity of analytical inputs.
7. Benefits: Examine how projected benefits relate to stated objectives.
8. Costs: Understand what costs are included.
9. Distribution: Consider how benefits and costs are distributed.
10. Symmetrical treatment: Ensure that benefits and costs are presented symmetrically.
Comparative risk assessment is usually performed to inform risk ranking and prioritization exercises. Here it is applied as an innovative tool for testing the scientific validity and reliability of a 2002 USEPA human health risk assessment of perchlorate. Dietary exposure to nitrate is compared with drinking water exposure to perchlorate; both chemicals act on the thyroid gland via iodide uptake inhibition (IUI). The analysis shows that dietary nitrate is predicted to cause orders of magnitude more IUI than perchlorate exposure at environmental concentrations. If the 2002 USEPA risk assessment is scientifically valid and reliable, then a generally accepted decade-old USEPA nitrate risk assessment is fatally flawed, and risk management decisions based on it are severely under-protective. If the nitrate risk assessment is valid and reliable, however, then the 2002 USEPA perchlorate risk assessment is fatally flawed and unreliable, and should not be used as the basis for risk management. The origin of this inconsistency is a policy decision to deem IUI a "key event" that may lead to changes in thyroid hormones and consequent adverse effects. This implicitly treats IUI as "adverse." Unless large and sustained over a long period, however, IUI is mundane, reversible, and arises at exposure levels orders of magnitude below those associated with true adverse effects. In communities where quantitative human health risk assessment is expensive or expertise is lacking, comparative exposure assessment provides a cost-effective means to evaluate the merits of such assessments before taking costly risk management actions.
Periodically, ethical objections are raised against the practice of discounting for future effects. Concerns about the potential effects on future generations from long-term nuclear waste disposal and global climate change have caused these ethical objections to recur. This article rebuts the various ethical objections to future discounting on practical, ethical, and analytic grounds. Discounting for future effects is a ubiquitous practice that cannot be practically prevented. Even if public policy were to dictate against future discounting in public decisions, such a constraint could never be successfully imposed on markets. Market values will always reflect the full, discounted streams of future effects even if governments prohibited the practice among individuals. Ethically, there is no basis for choosing an upper-bound time horizon beyond which discounting should be rejected. Any proposed horizon is arbitrary and has no obvious foundation. All decisions are fundamentally irreversible, so opponents of future discounting also must define a degree of irreversibility beyond which normal discounting should not apply, and defend on ethical grounds the basis for this demarcation. This task is further complicated by the likelihood that choices are rarely, if ever, as irreversible as opponents suggest. Typical examples given to prove that future discounting is inappropriate overstate the degree of irreversibility actually present and understate subsequent opportunities for modification. Finally, opposition to distant-future discounting on the ground that burdens are shifted to future generations must face the fact that such shifts are characteristic of intergenerational transfers now practiced widely and with great public support.
Conventional spirometry produces measurement error by using repeatability criteria (RC) to discard acceptable data and terminating tests early when RC are met. These practices also implicitly assume that there is no variation across maneuvers within each test. This has implications for air pollution regulations that rely on pulmonary function tests to determine adverse effects or set standards. We perform a Monte Carlo simulation of 20,902 tests of forced expiratory volume in 1 second (FEV1), each with eight maneuvers, for an individual with empirically obtained, plausibly normal pulmonary function. Default coefficients of variation for inter‐ and intratest variability (3% and 6%, respectively) are employed. Measurement error is defined as the difference between results from the conventional protocol and an unconstrained, eight‐maneuver alternative. In the default model, average measurement error is shown to be ∼5%. The minimum difference necessary for statistical significance at p < 0.05 for a before/after comparison is shown to be 16%. Meanwhile, the U.S. Environmental Protection Agency has deemed single‐digit percentage decrements in FEV1 sufficient to justify more stringent national ambient air quality standards. Sensitivity analysis reveals that results are insensitive to intertest variability but highly sensitive to intratest variability. Halving the latter to 3% reduces measurement error by 55%. Increasing it to 9% or 12% increases measurement error by 65% or 125%, respectively. Within‐day FEV1 differences ≤5% among normal subjects are believed to be clinically insignificant. Therefore, many differences reported as statistically significant are likely to be artifactual. Reliable data are needed to estimate intratest variability for the general population, subpopulations of interest, and research samples. 
Sensitive subpopulations (e.g., chronic obstructive pulmonary disease or COPD patients, asthmatics, children) are likely to have higher intratest variability, making it more difficult to derive valid statistical inferences about differences observed after treatment or exposure.
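The simulation design described above can be sketched in a few lines. The abstract does not specify the exact repeatability criterion or early-termination rule used, so this sketch assumes a hypothetical ATS-style 150 mL repeatability criterion, a minimum of three maneuvers, and a nominal "true" FEV1 of 4.0 L; it illustrates the mechanism (conventional early-stopping protocol vs. an unconstrained eight-maneuver protocol) rather than reproducing the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

TRUE_FEV1 = 4.0     # litres; hypothetical "true" value for one subject
CV_INTER = 0.03     # intertest (session-to-session) coefficient of variation
CV_INTRA = 0.06     # intratest (maneuver-to-maneuver) coefficient of variation
N_TESTS = 20_902    # number of simulated tests, as in the abstract
N_MANEUVERS = 8
RC_LITRES = 0.150   # assumed ATS-style repeatability criterion (150 mL)

def simulate_test(rng):
    # Each session has its own mean (intertest variability); each maneuver
    # then varies around that session mean (intratest variability).
    session_mean = TRUE_FEV1 * (1 + rng.normal(0, CV_INTER))
    maneuvers = session_mean * (1 + rng.normal(0, CV_INTRA, N_MANEUVERS))

    # Unconstrained protocol: perform all 8 maneuvers, report the maximum.
    unconstrained = maneuvers.max()

    # Conventional protocol: terminate as soon as the two largest maneuvers
    # so far agree within the repeatability criterion (minimum 3 maneuvers).
    for k in range(3, N_MANEUVERS + 1):
        best_two = np.sort(maneuvers[:k])[-2:]
        if best_two[1] - best_two[0] <= RC_LITRES:
            conventional = best_two[1]
            break
    else:
        conventional = maneuvers.max()

    return conventional, unconstrained

# Measurement error: shortfall of the conventional result relative to the
# unconstrained eight-maneuver result (nonnegative by construction).
errors = []
for _ in range(N_TESTS):
    conv, unc = simulate_test(rng)
    errors.append((unc - conv) / unc)

print(f"mean measurement error: {np.mean(errors):.1%}")
```

Because the conventional protocol reports the best of an early subset of maneuvers while the unconstrained protocol reports the best of all eight, the error is always nonnegative; its magnitude depends strongly on the intratest CV, consistent with the sensitivity analysis reported above.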