Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage, and modification. The measure of information transfer in particular, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy between two processes requires multiple realizations of these processes in order to estimate the associated probability density functions. To obtain the necessary observations, available estimators typically assume stationarity of the processes, which allows pooling of observations over time. This assumption, however, is a major obstacle to applying these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption can be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. In this work we therefore combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suited to the increased computational demand of the ensemble method in practical applications. In particular, we use a massively parallel implementation on a graphics processing unit to handle the most computationally demanding aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes, and we demonstrate the applicability of the ensemble method to magnetoencephalographic data.
While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.
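To make the core idea concrete, below is a minimal sketch of a plug-in transfer entropy estimator that pools observations across an ensemble of trials rather than assuming stationarity within one long recording. It is a toy illustration only, not the authors' nearest-neighbour estimator or GPU implementation: it uses binary symbols, history length one, and simple histogram counts, and all variable names and the toy coupling (Y copies X with a one-step delay 90% of the time) are assumptions made for the example.

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(x_trials, y_trials):
    """Plug-in estimate of TE(X -> Y) in bits, history length 1,
    pooling observations across an ensemble of trials."""
    triples = Counter()  # counts of (y_next, y_now, x_now)
    for x, y in zip(x_trials, y_trials):
        for t in range(len(x) - 1):
            triples[(y[t + 1], y[t], x[t])] += 1
    n = sum(triples.values())
    pair_yx, pair_yy, single_y = Counter(), Counter(), Counter()
    for (yn, yc, xc), c in triples.items():
        pair_yx[(yc, xc)] += c
        pair_yy[(yn, yc)] += c
        single_y[yc] += c
    te = 0.0
    for (yn, yc, xc), c in triples.items():
        p_full = c / pair_yx[(yc, xc)]            # p(y_next | y, x)
        p_red = pair_yy[(yn, yc)] / single_y[yc]  # p(y_next | y)
        te += (c / n) * log2(p_full / p_red)
    return te

# Toy ensemble of trials: Y copies X with a one-step delay 90% of the time,
# so we expect TE(X -> Y) to be large and TE(Y -> X) to be near zero.
random.seed(0)
x_trials, y_trials = [], []
for _ in range(200):
    x = [random.randint(0, 1) for _ in range(50)]
    y = [0] + [b if random.random() < 0.9 else 1 - b for b in x[:-1]]
    x_trials.append(x)
    y_trials.append(y)
```

Calling `transfer_entropy(x_trials, y_trials)` on this toy ensemble yields a clearly larger value than the reverse direction, reflecting the directed coupling from X to Y.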
Research on the use of social networks for health-related purposes is limited. This study aims to characterize the purpose and use of Facebook and Twitter groups concerning colorectal cancer, breast cancer, and diabetes. We searched Facebook (www.facebook.com) and Twitter (www.twitter.com) using the terms "colorectal cancer," "breast cancer," and "diabetes." Each relevant group was analyzed by extracting its name, number of members, interests, and Web site URL. We found 216 breast cancer groups, 171 colorectal cancer groups, and 527 diabetes groups on Facebook and Twitter. The largest share of the colorectal cancer groups (25.58%) addresses prevention, as do the breast cancer groups, whereas diabetes groups focus mainly on research issues (25.09%). There are more groups about breast cancer and diabetes on Facebook (around 82%) than on Twitter (around 18%). For colorectal cancer the difference is smaller: Facebook hosted 62.23% of the groups and Twitter 31.76%. Social networks are a useful tool for supporting patients suffering from these three diseases. For disease support purposes, Facebook shows a higher usage rate than Twitter, perhaps because Twitter is newer than Facebook and its use is not yet as widespread.
In this paper, we present a fully automatic brain tumor segmentation and classification model using a deep convolutional neural network that includes a multiscale approach. One difference between our proposal and previous work is that input images are processed at three spatial scales along different processing pathways, a mechanism inspired by the operation of the human visual system. The proposed model can analyze MRI images containing three types of tumors (meningioma, glioma, and pituitary tumor) over sagittal, coronal, and axial views, and does not require preprocessing of input images to remove the skull or vertebral column in advance. The performance of our method on a publicly available MRI image dataset of 3064 slices from 233 patients is compared with previously published classical machine learning and deep learning methods. In this comparison, our method obtained a remarkable tumor classification accuracy of 0.973, higher than the other approaches on the same database.
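The multiscale input idea can be sketched with a simple image pyramid: the same image is downsampled by average pooling and each resolution is handed to its own pathway. This is only an illustration of the three-scale input construction under assumed 2x2 pooling; the paper's actual network layers and pathway architectures are not reproduced here.

```python
def avg_pool2(img):
    """2x2 average pooling; halves each spatial dimension.

    img is a list of rows of floats (a grayscale image)."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def three_scale_input(img):
    """Return the same image at full, half, and quarter resolution,
    one copy per processing pathway."""
    half = avg_pool2(img)
    return img, half, avg_pool2(half)

# Example: an 8x8 ramp image yields 8x8, 4x4, and 2x2 pathway inputs.
img = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
full, half, quarter = three_scale_input(img)
```

In a full model, each of the three tensors would feed a separate convolutional pathway before the features are merged for classification.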
To effectively manage the use of existing infrastructure, intelligent transportation systems require precise forecasting of near-term traffic volumes to feed real-time analytical models and traffic surveillance tools that warn of network links reaching capacity. This article proposes a new methodological approach for short-term prediction of time series of volume data at isolated cross sections. The originality of the computational modeling stems from fitting the threshold values used in the stationary wavelet-based denoising applied to the time series, and from determining patterns that characterize the evolution of its samples over a fixed prediction horizon. A self-organizing fuzzy neural network is optimized in its configuration parameters for learning and recognizing these patterns. Four real-world data sets from three interstate roads are used to evaluate the performance of the proposed model. A quantitative comparison with the results of four other relevant prediction models shows a favorable outcome.
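Wavelet-threshold denoising, the preprocessing step this abstract tunes, can be illustrated with a one-level Haar transform: decompose the signal into approximation and detail coefficients, soft-threshold the details, and invert. Note this sketch uses a plain decimated Haar step, not the stationary (undecimated) wavelet transform of the article, and the threshold here is a free parameter rather than a fitted one.

```python
def soft_threshold(x, t):
    """Shrink a detail coefficient toward zero by t."""
    return max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0)

def haar_denoise(signal, threshold):
    """One-level Haar transform, soft-threshold the details, invert.

    Signal length must be even. With threshold 0 this is a perfect
    reconstruction; larger thresholds suppress high-frequency noise."""
    s = 2 ** 0.5
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / s for i in range(half)]
    detail = [soft_threshold((signal[2 * i] - signal[2 * i + 1]) / s, threshold)
              for i in range(half)]
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out
```

Fitting the threshold, as the article proposes, amounts to choosing `threshold` so that prediction error on held-out traffic data is minimized instead of using a fixed rule of thumb.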
Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view; multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named “CARMEN” are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor is detailed, along with the three neural network frameworks used and the CUDA code developed. Changes to the size of the reconstructor influence the training and execution times of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances.
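At its core, a neural network reconstructor of this kind maps wavefront-sensor slope measurements to correction commands through a trained feed-forward network. The sketch below shows only that forward pass in pure Python; the layer sizes, weights, and activation choice are illustrative assumptions, not the CARMEN architecture or its CUDA implementation (which exists precisely because dense layers like these scale poorly on a CPU as the reconstructor grows).

```python
import math
import random

def dense(x, weights, bias):
    """Fully connected layer: y_j = sum_i w[j][i] * x[i] + b[j]."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def reconstruct(slopes, w1, b1, w2, b2):
    """Sensor slopes -> tanh hidden layer -> linear correction commands."""
    hidden = [math.tanh(h) for h in dense(slopes, w1, b1)]
    return dense(hidden, w2, b2)

# Toy dimensions: 8 sensor slopes -> 16 hidden units -> 4 commands.
random.seed(1)
w1 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(16)]
b1 = [0.0] * 16
w2 = [[random.uniform(-1, 1) for _ in range(16)] for _ in range(4)]
b2 = [0.0] * 4
commands = reconstruct([0.1] * 8, w1, b1, w2, b2)
```

The execution-time scaling discussed in the paper comes from the matrix-vector products in `dense`, whose cost grows with the product of the layer sizes, which is why a GPU implementation pays off for large reconstructors.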