Empirical networks often exhibit a variety of meso-scale structures, such as community and core–periphery structures. Core–periphery structure typically consists of a well-connected core and a periphery that is well connected to the core but sparsely connected internally. Most core–periphery studies focus on undirected networks. We propose a generalization of core–periphery structure to directed networks. Our approach yields a family of core–periphery block model formulations in which, contrary to many existing approaches, core and periphery sets are edge-direction dependent. We focus on a particular structure consisting of two core sets and two periphery sets, which we motivate empirically. We propose two measures to assess the statistical significance and quality of this novel structure in empirical data, where one often has no ground truth. To detect core–periphery structure in directed networks, we propose three methods adapted from two approaches in the literature, each with a different trade-off between computational complexity and accuracy. On benchmark networks, our methods match or outperform standard methods from the literature, with a likelihood-based approach achieving the highest accuracy. Applying our methods to three empirical networks (faculty hiring, world trade, and political blogs) illustrates that our proposed structure provides novel insights into empirical networks.
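The notion of an edge-direction-dependent core–periphery pattern can be sketched with a toy planted partition. The sketch below is an illustrative simplification (a single core and a single periphery, with edges expected only out of the core), not the paper's two-core/two-periphery model, and the quality score is a generic pattern-match fraction rather than the paper's measures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Plant a directed core-periphery pattern: edges are dense whenever the
# *source* node is in the core, sparse otherwise. Whether an edge between
# a given node pair is expected thus depends on the edge's direction.
n_core, n_per = 10, 30
n = n_core + n_per
labels = np.array([1] * n_core + [0] * n_per)  # 1 = core, 0 = periphery

A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        p = 0.7 if labels[i] == 1 else 0.05
        A[i, j] = int(rng.random() < p)

def cp_quality(A, labels):
    """Fraction of ordered node pairs whose (non-)edge matches the
    idealized pattern: an edge is expected iff the source is in the core."""
    n = len(labels)
    match = 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            expected = labels[i] == 1
            match += int((A[i, j] == 1) == expected)
    return match / (n * (n - 1))

print(round(cp_quality(A, labels), 3))
```

With the planted densities above, the score sits well above what a random labelling would achieve, which is the kind of signal a detection method would try to maximise.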
This is a non-peer-reviewed preprint submitted to EarthArXiv. Anthropogenic warming has led to an unprecedented year-round reduction in Arctic sea ice extent [1,2]. This has far-reaching consequences for indigenous and local communities, polar ecosystems, and global climate, motivating the need for accurate seasonal sea ice forecasts. While physics-based dynamical models can successfully forecast sea ice concentration several weeks ahead, they struggle to outperform simple statistical models at longer lead times [3,4], and calibrating their forecasts can be challenging [5]. We present a probabilistic, deep learning [6] sea ice forecasting system, IceNet. The system has been trained on climate simulations covering 1850-2100 and observational data from 1979-2011 to forecast the next 6 months of monthly-averaged sea ice concentration maps. IceNet advances the range of accurate sea ice forecasts, outperforming a state-of-the-art dynamical model [7] in seasonal forecasts of summer sea ice. It also demonstrates a greater ability to predict anomalous pan-Arctic sea ice extents than the models submitted to the Sea Ice Outlook programme [8]. In addition, IceNet's well-calibrated probabilistic forecasts mean it can reliably bound the ice edge between two contours. IceNet's accuracy and reliability represent a step-change in sea ice forecasting, providing a robust framework to build early-warning systems and conservation tools that mitigate risks associated with rapid sea ice loss.

Near-surface air temperatures in the Arctic have increased at roughly twice the rate of the global average, a phenomenon known as 'Arctic amplification' that is caused by a number of positive feedbacks [1,2,9].
Rising temperatures have played a key role in reducing Arctic sea ice, with September sea ice extent now around half that of 1979, when satellite measurements of the Arctic began [10]. This downward trend will continue even in optimistic greenhouse gas emission reduction scenarios [11]. Climate simulations project the Arctic to be ice free in the summer by 2050 [12]; other studies put this date as early as the 2030s [13]. Such unprecedented sea ice loss has profound local and regional consequences: it is the greatest threat to polar bear populations [14]; it has increased the intensity and frequency of algal blooms that propagate toxins throughout the food web [15]; and it poses significant challenges for Indigenous Peoples, with impacts ranging from food security [15] to loss of culture [16].
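The idea of bounding the ice edge between two probability contours can be illustrated on a toy probability map. The field below is synthetic, not IceNet output; it only shows the mechanics of turning a per-pixel probability of ice into an inner "confidently ice" region, an outer "possibly ice" region, and an uncertainty band between them:

```python
import numpy as np

# Synthetic per-pixel probability that sea ice concentration exceeds a
# threshold: a soft circular "ice cap" centred on the pole.
x = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2)
p = 1.0 / (1.0 + np.exp(5.0 * (r - 1.5)))

inner = p >= 0.9          # pixels the forecast is confident are ice
outer = p >= 0.1          # pixels the forecast says could be ice
band = outer & ~inner     # uncertainty band that should contain the ice edge

# For a well-calibrated forecast, the observed ice edge should fall inside
# this band roughly (0.9 - 0.1) = 80% of the time; a sharper forecast
# achieves that with a narrower band.
print(round(band.mean(), 3))  # fraction of the domain inside the band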
Motivation: Our work is motivated by an interest in constructing a protein–protein interaction network that captures key features associated with Parkinson's disease. While there is an abundance of subnetwork construction methods available, it is often far from obvious which subnetwork is the most suitable starting point for further investigation.

Results: We provide a method to assess whether a subnetwork constructed from a seed list (a list of nodes known to be important in the area of interest) differs significantly from a randomly generated subnetwork. The proposed method uses a Monte Carlo approach. As different seed lists can give rise to the same subnetwork, we control for redundancy by constructing a minimal seed list as the starting point for the significance test. The null model is based on random seed lists of the same length as a minimal seed list that generates the subnetwork; in such a random seed list, the nodes have (approximately) the same degree distribution as the nodes in the minimal seed list. We use this null model to select subnetworks that deviate significantly from random on an appropriate set of statistics and might therefore capture useful information in a real-world protein–protein interaction network.

Availability and implementation: The software used in this paper is available for download at . It is written in Python and uses the NetworkX library.

Supplementary information: Supplementary data are available at Bioinformatics online.
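The Monte Carlo idea above can be sketched in a few lines of NetworkX. Everything in this sketch is a stand-in: the subnetwork construction rule, the degree-matching heuristic, the density statistic, and all parameters are illustrative choices, not the paper's exact implementation:

```python
import random
import networkx as nx

random.seed(0)

# Background network plus a small seed list whose induced neighbourhood we
# want to test; extra edges are planted among the seeds so their
# subnetwork is denser than a random one.
G = nx.erdos_renyi_graph(200, 0.05, seed=1)
seeds = [0, 1, 2, 3, 4]                        # stand-in "minimal seed list"
for u in seeds:
    for v in seeds:
        if u < v:
            G.add_edge(u, v)

def subnetwork(G, seed_list):
    """Seed nodes plus their neighbours (one simple construction rule)."""
    nodes = set(seed_list)
    for s in seed_list:
        nodes.update(G.neighbors(s))
    return G.subgraph(nodes)

def degree_matched_seeds(G, seed_list):
    """Random seed list whose degrees approximately match the original's."""
    sample = []
    for s in seed_list:
        d = G.degree(s)
        near = sorted(G.nodes(), key=lambda w: abs(G.degree(w) - d))[:20]
        sample.append(random.choice(near))
    return sample

statistic = nx.density                          # one possible test statistic
observed = statistic(subnetwork(G, seeds))
null = [statistic(subnetwork(G, degree_matched_seeds(G, seeds)))
        for _ in range(200)]
p_value = sum(s >= observed for s in null) / len(null)
print(round(p_value, 3))
```

A small empirical p-value indicates the seed-derived subnetwork deviates from the degree-matched null on the chosen statistic; in practice one would repeat this over a set of statistics, as the abstract describes.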
Objective. To explore how lifestyle, demographic, socioeconomic, and disease-related factors are associated with supervised exercise adherence in an osteoarthritis (OA) management program, and how well these factors explain exercise adherence.Methods. A register-based cohort study of participants from the Swedish Osteoarthritis Registry who attended the exercise part of a nationwide Swedish OA management program. We ran a multinomial logistic regression to determine the association of exercise adherence with the abovementioned factors, and quantified their ability to explain exercise adherence with McFadden's R².Results. Our sample comprised 19,750 participants (73% female, mean ± SD age 67 ± 8.9 years). Among them, 5,862 (30%) reached a low level of adherence, 3,947 (20%) a medium level, and 9,941 (50%) a high level. After listwise deletion, the analysis was run on 16,685 participants (85%), with low adherence as the reference category. Some factors were positively associated with high adherence, such as older age (relative risk ratio [RRR] 1.01 [95% confidence interval (95% CI) 1.01-1.02] per year) and arthritis-specific self-efficacy (RRR 1.04 [95% CI 1.02-1.07] per 10-point increase). Others were negatively associated with high adherence, such as female sex (RRR 0.82 [95% CI 0.75-0.89]) and having a medium (RRR 0.89 [95% CI 0.81-0.98]) or high level of education. Nevertheless, the investigated factors explained only 1% of the variability in exercise adherence (R² = 0.012).Conclusion. Despite the associations reported above, the poorly explained variability suggests that strategies based on lifestyle, demographic, socioeconomic, and disease-related factors are unlikely to improve exercise adherence significantly.
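The analysis pattern above, a multi-class logistic regression scored with McFadden's pseudo-R², can be sketched on synthetic data. All variable names, effect sizes, and sample sizes below are invented for illustration; only the computation of McFadden's R² (one minus the ratio of model to intercept-only log-likelihood) follows the standard definition:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)

# Synthetic stand-in for registry data: predict adherence level
# (0 = low, 1 = medium, 2 = high) from age, sex, and self-efficacy,
# with deliberately weak true effects.
n = 2000
age = rng.normal(67, 9, n)
female = rng.integers(0, 2, n)
self_eff = rng.normal(70, 15, n)
lin = 0.01 * (age - 67) + 0.004 * (self_eff - 70) - 0.2 * female
p_high = 1.0 / (1.0 + np.exp(-lin))
y = np.where(rng.random(n) < p_high, 2, rng.integers(0, 2, n))

X = np.column_stack([age, female, self_eff])
model = LogisticRegression(max_iter=1000).fit(X, y)

# McFadden's pseudo-R^2: 1 - LL(model) / LL(intercept-only null model)
ll_model = -log_loss(y, model.predict_proba(X), normalize=False)
p_null = np.bincount(y, minlength=3) / n
ll_null = -log_loss(y, np.tile(p_null, (n, 1)), normalize=False)
mcfadden = 1.0 - ll_model / ll_null
print(round(mcfadden, 3))
```

As in the paper's finding, weak predictors yield a McFadden R² close to zero even when individual coefficients are statistically distinguishable from the null.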
In-situ source characterisation methods are those in which measurements are made while the source and receiver are coupled, as they would be in a real installation. In-situ source characterisation may therefore account for the physical reality lost in the "black box" approach, and it offers other potential benefits such as ease of measurement. In this work, a structure-borne sound source is characterised using in-situ measurements of blocked force and coupled mobility. Promising results from the method have been presented previously. Further to this, an extension of the method allowing the use of remote measurement positions has been developed. Using reciprocity, the extended method will further ease measurement in situations where access poses a problem. The extended method is outlined and some preliminary validation results are presented.
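The core inversion behind blocked-force characterisation can be sketched numerically: if Yc is the coupled mobility matrix at the contact points and v the operational velocities measured there, the blocked forces follow from solving Yc f = v. The values below are arbitrary single-frequency stand-ins, not measured data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_contacts = 3

# "True" blocked forces (one frequency line) that we try to recover
f_true = np.array([1.0 + 0.5j, -0.3 + 0.2j, 0.8 - 0.1j])

# Coupled (source + receiver) mobility matrix at the contact points;
# made symmetric to reflect reciprocity
Yc = rng.normal(size=(n_contacts, n_contacts)) \
     + 1j * rng.normal(size=(n_contacts, n_contacts))
Yc = (Yc + Yc.T) / 2

# Operational velocities the source produces at the contacts
v = Yc @ f_true

# In-situ blocked-force estimate; least squares is used rather than a
# plain inverse because remote/redundant measurement positions make the
# system overdetermined in practice
f_est, *_ = np.linalg.lstsq(Yc, v, rcond=None)
print(np.allclose(f_est, f_true))
```

In a real measurement, Yc and v are contaminated by noise and the conditioning of Yc governs how well the inversion behaves, which is one motivation for the extra remote positions mentioned above.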