In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performance, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions.
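The evaluation concepts behind this benchmarking can be illustrated with a minimal sketch. The function below is not CAMI's official tooling; it simply computes length-weighted purity (precision) per predicted bin and completeness (recall) per gold-standard genome from a list of contig assignments, with all names and the input format chosen here for illustration.

```python
from collections import defaultdict

def binning_purity_completeness(assignments):
    """assignments: list of (predicted_bin, true_genome, contig_length).

    Returns (purity, completeness) dicts, length-weighted.
    Illustrative only; not CAMI's official metric implementation.
    """
    bin_totals = defaultdict(int)      # total assigned length per predicted bin
    genome_totals = defaultdict(int)   # total length per gold-standard genome
    overlap = defaultdict(int)         # (bin, genome) -> shared contig length
    for b, g, length in assignments:
        bin_totals[b] += length
        genome_totals[g] += length
        overlap[(b, g)] += length

    # Purity: fraction of each bin's length belonging to its majority genome.
    purity = {
        b: max(overlap[(b, g)] for g in genome_totals if (b, g) in overlap) / total
        for b, total in bin_totals.items()
    }
    # Completeness: fraction of each genome recovered by its best single bin.
    completeness = {
        g: max(overlap[(b, g)] for b in bin_totals if (b, g) in overlap) / total
        for g, total in genome_totals.items()
    }
    return purity, completeness

p, c = binning_purity_completeness([
    ("bin1", "genomeA", 900),   # bin1 is mostly genomeA ...
    ("bin1", "genomeB", 100),   # ... with a small contamination from genomeB
    ("bin2", "genomeB", 400),
])
# p["bin1"] == 0.9; c["genomeB"] == 0.8
```

Length weighting matters because a few long misassigned contigs can contaminate a bin far more than many short ones; the same intuition underlies why closely related strains, whose contigs are hard to separate, depress these scores.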
Cognitive-behavioral treatment (CBT) protocols for panic disorder (PD) consist of a set of interventions that often includes some form of breathing retraining (BR). A controlled outcome study was designed to assess the necessity of BR in the context of a multicomponent CBT protocol. To accomplish this, patients with PD (N = 77) were randomly assigned to receive CBT with or without BR or to a delayed-treatment control. The main study hypothesis was that patients receiving BR would display a less complete recovery relative to the other active-treatment condition given that BR appears to be a more attractive (but less adaptive) option for some patients. Some data suggested that the addition of BR yielded a poorer outcome. However, findings were generally more consistent with treatment equivalence, questioning whether BR produces any incremental benefits in the context of other CBT interventions for PD.
Several recent processor designs have proposed to enhance performance by increasing the clock frequency to the point where timing faults occur, and by adding error-correcting support to guarantee correctness. However, such Timing Speculation (TS) proposals are limited in that they assume traditional design methodologies that are suboptimal under TS. In this paper, we present a new approach where the processor itself is designed from the ground up for TS. The idea is to identify and optimize the most frequently-exercised critical paths in the design, at the expense of the majority of the static critical paths, which are allowed to suffer timing errors. Our approach and design optimization algorithm are called BlueShift. We also introduce two techniques that, when applied under BlueShift, improve processor performance: On-demand Selective Biasing (OSB) and Path Constraint Tuning (PCT). Our evaluation with modules from the OpenSPARC T1 processor shows that, compared to conventional TS, BlueShift with OSB speeds up applications by an average of 8% while increasing the processor power by an average of 12%. Moreover, compared to a high-performance TS design, BlueShift with PCT speeds up applications by an average of 6% with an average processor power overhead of 23%, providing a way to speed up logic modules that is orthogonal to voltage scaling.
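The reported speedup and power figures imply an energy trade-off that is easy to check back-of-envelope. The sketch below is not from the paper; it assumes energy per task scales as power times execution time at a fixed workload, which ignores idle power and other real effects.

```python
def energy_ratio(speedup, power_overhead):
    """Energy per task relative to baseline, assuming energy = power * time.

    speedup: fractional speedup (0.08 means tasks run 8% faster)
    power_overhead: fractional power increase (0.12 means 12% more power)
    """
    time_ratio = 1.0 / (1.0 + speedup)   # faster execution shortens runtime
    power_ratio = 1.0 + power_overhead   # but the chip draws more power
    return power_ratio * time_ratio

# BlueShift + OSB vs. conventional TS: 8% speedup at 12% extra power
osb = energy_ratio(0.08, 0.12)   # 1.12/1.08, about 3.7% more energy per task
# BlueShift + PCT vs. high-performance TS: 6% speedup at 23% extra power
pct = energy_ratio(0.06, 0.23)   # a larger energy cost for the same work
```

Under this simple model, both configurations trade some energy efficiency for raw performance, which is consistent with the paper positioning the technique as orthogonal to voltage scaling rather than as an energy optimization.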
Rulemaking is an integral component of environmental policy at both the federal and state level; however, rulemaking at the state level is understudied. With this research, we begin to fill that gap by focusing on rulemaking regarding the issue of hydraulic fracturing (fracking) in three states: Colorado, New York, and Ohio. This policy issue is well suited to begin exploring state‐level rulemaking processes because the federal government has left fracking regulation to the states. Through semistructured interviews with a range of actors in the rulemaking process across these states, we establish a foundation from which future research in this area may build. This exploratory research yields some valuable insights into the roles different stakeholders are playing in regulating fracking in these three states, and our findings may be useful for explaining state‐level rulemaking more generally.
Scholars have looked towards collaborative governance as one means to resolve complex environmental issues and this has travelled to the administrative policy realm in the form of negotiated rulemaking or more recently shuttle diplomacy. What is missing from this administrative literature, and that of collaborative governance literature more generally, is a discussion of the role of power in these processes. This paper provides a case study of the Colorado Oil and Gas Conservation Commission's (COGCC) collaborative rule‐making, the Statewide Groundwater Baseline Sampling and Monitoring Rule, to serve as a launch point to evaluate the role of power in state‐level collaborative processes. This research provides original interview data from agency staff and stakeholders to explore the ability of certain groups to exercise the power of agenda control to influence the outcome of the process in the controversial fracking arena. Here, the governor and agency staff were important in defining the problem, but determining the available solutions was more open to privileged stakeholders, namely industry. The interviewees suggested that these dynamics carry over to other COGCC rule‐makings and as such future scholars should evaluate the agenda control strategies of these groups to better understand why collaborative policy‐making, especially in the administrative realm, unfolds as it does. Copyright © 2015 John Wiley & Sons, Ltd and ERP Environment