Effective ecosystem risk assessment relies on a conceptual understanding of ecosystem dynamics and the synthesis of multiple lines of evidence. Risk assessment protocols and ecosystem models integrate limited observational data with threat scenarios, making them valuable tools for monitoring ecosystem status and diagnosing key mechanisms of decline to be addressed by management. We applied the IUCN Red List of Ecosystems criteria to quantify the risk of collapse of the Meso-American Reef, a unique ecosystem containing the second longest barrier reef in the world. We collated a wide array of empirical data (field and remotely sensed), and used a stochastic ecosystem model to backcast past ecosystem dynamics, as well as forecast future ecosystem dynamics under 11 scenarios of threat. The ecosystem is at high risk from mass bleaching in the coming decades, with compounding effects of ocean acidification, hurricanes, pollution and fishing. The overall status of the ecosystem is Critically Endangered (plausibly Vulnerable to Critically Endangered), with notable differences among Red List criteria and data types in detecting the most severe symptoms of risk. Our case study provides a template for assessing risks to coral reefs and for further application of ecosystem models in risk assessment.
The optimal design of reserve networks and fisheries closures depends on species occurrence information and knowledge of how anthropogenic impacts interact with the species concerned. However, challenges in surveying mobile and cryptic species over adequate spatial and temporal scales can mask the importance of particular habitats, leading to uncertainty about which areas to protect to optimize conservation efforts. We investigated how telemetry‐derived locations can help guide the scale and timing of fisheries closures with the aim of reducing threatened species bycatch. Forty juvenile speartooth sharks (Glyphis glyphis) were monitored over 22 months with implanted acoustic transmitters and an array of hydrophone receivers. Using the decision‐support tool Marxan, we formulated a permanent fisheries closure that prioritized areas used more frequently by tagged sharks and considered areas perceived as having high value to fisheries. To explore how the size of the permanent closure compared with an alternative set of time‐area closures (i.e., where different areas were closed to fishing at different times of year), we used a cluster analysis to group months that had similar arrangements of selected planning units (informed by shark movements during that month) into 2 time‐area closures. Sharks were consistent in the timing and direction of their migratory movements, but the number of tagged sharks strongly influenced the placement of the permanent closure; 30 individuals were needed to capture behavioral heterogeneity. The dry‐season (May–January) and wet‐season (February–April) time‐area closures opened 20% and 25% more planning units to fishing, respectively, compared with the permanent closure with boundaries fixed in space and time.
Our results show that telemetry has the potential to inform and improve spatial management of mobile species and that the temporal component of tracking data can be incorporated into prioritizations to reduce possible impacts of spatial closures on established fisheries.
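The month-grouping step described above can be sketched as a clustering problem: each month's prioritization yields a binary vector over planning units, and months with similar selection patterns are merged into candidate time-area closures. This is an illustrative sketch only; the simulated selection data, the 2-means routine, and all names here are invented to mirror the described workflow, not the authors' actual Marxan outputs.

```python
import numpy as np

# Simulate month-by-planning-unit selections (1 = unit selected that month).
# Two underlying seasonal profiles stand in for dry- and wet-season behavior.
rng = np.random.default_rng(1)
months = ["May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec", "Jan",  # dry
          "Feb", "Mar", "Apr"]                                            # wet
n_units = 30
dry_profile = rng.random(n_units) < 0.4
wet_profile = rng.random(n_units) < 0.4
selections = np.array(
    [dry_profile ^ (rng.random(n_units) < 0.05) for _ in range(9)] +
    [wet_profile ^ (rng.random(n_units) < 0.05) for _ in range(3)]
).astype(float)

def two_means(X, iters=20):
    """Minimal 2-cluster k-means on per-month selection vectors."""
    centers = X[[0, -1]].copy()  # seed with first and last months
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each month to its nearest cluster center.
        dists = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        # Recompute centers as the mean selection pattern of each group.
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(0)
    return labels

labels = two_means(selections)
for k in (0, 1):
    print(f"closure {k}:", [m for m, l in zip(months, labels) if l == k])
```

Each resulting cluster defines one time-area closure: the set of planning units common to its months is closed only during those months, leaving the rest open to fishing for part of the year.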
Energy consumption is one of the top challenges for achieving the next generation of supercomputing. Codesign of hardware and software is critical for improving energy efficiency (EE) in future large-scale systems. Many architectural power-saving techniques have been developed, and most hardware components are approaching physical limits. Accordingly, parallel computing software, including both applications and systems, should exploit power-saving hardware innovations and manage energy use efficiently. In addition, new power-aware parallel computing methods are essential to decrease energy usage further. This article surveys software-based methods that aim to improve EE for parallel computing. It reviews methods that exploit the characteristics of parallel scientific applications, including load imbalance and mixed precision of floating-point (FP) calculations, to improve EE. In addition, this article summarizes widely used methods to improve power usage at different granularities, such as the whole system and per application. In particular, it describes the most important techniques for measuring and achieving energy-efficient use of various parallel computing facilities, including processors, memories, and networks. Overall, this article reviews the state of the art in energy-efficient methods for parallel computing to motivate researchers to achieve optimal parallel computing under a power budget constraint.
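Mixed-precision FP calculation, one of the application-level techniques the survey covers, trades cheap low-precision arithmetic (fewer bits moved and computed, hence less energy per operation) for extra correction steps in high precision. The sketch below is a minimal, generic illustration of the idea using iterative refinement for a linear solve; the function name and problem setup are assumptions for the example, not taken from the article.

```python
import numpy as np

def mixed_precision_solve(A, b, iterations=3):
    """Solve Ax = b doing the expensive solves in float32 (the
    energy-saving step), then correct with float64 residuals."""
    A32 = A.astype(np.float32)
    # Initial low-precision solve, promoted back to float64.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iterations):
        r = b - A @ x  # residual computed in full float64 precision
        # Cheap float32 correction solve, accumulated in float64.
        d = np.linalg.solve(A32, r.astype(np.float32))
        x = x + d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)  # well conditioned
b = rng.standard_normal(50)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))  # small residual despite float32 work
```

For well-conditioned systems, a few refinement sweeps recover near-float64 accuracy while the dominant O(n³) work stays in float32; on real hardware the same pattern maps onto half-precision units for further energy savings.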