Clinical and environmental meta-omics studies are accumulating an ever-growing amount of microbial abundance data over a wide range of ecosystems. Given a sufficiently large number of samples, these microbial communities can be explored by constructing and analyzing co-occurrence networks, which detect taxon associations from abundance data and can give insights into community structure. Here, we investigate how co-occurrence networks differ across biomes and which other factors influence their properties. For this, we inferred microbial association networks from 20 different 16S rDNA sequencing data sets and observed that soil microbial networks harbor proportionally fewer positive associations and are less densely interconnected than host-associated networks. After excluding sample number, sequencing depth and beta-diversity as possible drivers, we found a negative correlation between community evenness and positive edge percentage. This correlation likely results from a skewed distribution of negative interactions, which take place preferentially between less prevalent taxa. Overall, our results suggest an under-appreciated role of evenness in shaping microbial association networks.
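The core quantities above (edges inferred from abundance correlations, the positive edge percentage, and community evenness) can be sketched as follows. This is a minimal illustration using plain Pearson correlation with an arbitrary threshold and Pielou's evenness; it is not the compositionality-aware network-inference method used in the study, and all thresholds and data here are invented for the example:

```python
import math

def pearson(x, y):
    # Plain Pearson correlation between two abundance vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def infer_edges(abundance, threshold=0.6):
    """abundance: one abundance vector per taxon (values across samples).
    Returns signed edges (i, j, r) whose |correlation| exceeds the threshold."""
    edges = []
    for i in range(len(abundance)):
        for j in range(i + 1, len(abundance)):
            r = pearson(abundance[i], abundance[j])
            if abs(r) >= threshold:
                edges.append((i, j, r))
    return edges

def positive_edge_percentage(edges):
    # Share of network edges that represent positive associations.
    return 100.0 * sum(r > 0 for _, _, r in edges) / len(edges) if edges else 0.0

def pielou_evenness(counts):
    # Shannon entropy of relative abundances, normalized by log richness.
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    h = -sum(p * math.log(p) for p in props)
    return h / math.log(len(props)) if len(props) > 1 else 0.0
```

For instance, three taxa where the first two co-vary and the third varies inversely yield one positive edge out of three, i.e. a positive edge percentage of about 33%.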
Deep learning offers an efficient set of methods for learning from massive volumes of data using complex deep neural networks. To facilitate the design and implementation of such algorithms, deep learning frameworks provide high-level programming interfaces. Built on these frameworks, new models and applications achieve ever-better predictions. One class of deep learning application is the Internet of Things, which gathers a continuous flow of data and thereby causes the amount of data to explode. To handle this data-management challenge, distributed computation technologies offer new ways to analyze more data with more complex models. In this context, a cluster of computers can cooperate to deliver a model quickly or to enable the design of a complex neural network spread across machines. An alternative is to distribute a deep learning task over HPC cloud computing resources and to scale the cluster so as to train a neural network quickly and efficiently. As a first step toward designing an infrastructure-aware framework that can scale the number of computing nodes, this work reviews and analyzes state-of-the-art frameworks by collecting device utilization data during the training task. We gather CPU, RAM and GPU utilization figures for deep learning algorithms with and without multi-threading. The behavior of each framework is discussed and analyzed in order to shed light on the strengths and weaknesses of the different deep learning frameworks.
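The utilization-collection step described above can be sketched with the Python standard library alone. This is a minimal, illustrative sampler, not the paper's instrumentation: it records per-interval process CPU utilization while a workload runs, the workload below is a placeholder for a training step, and in practice RAM and GPU metrics would come from tools such as psutil or nvidia-smi:

```python
import threading
import time

class UtilizationSampler:
    """Background thread that periodically records process CPU utilization
    (CPU time consumed per unit of wall-clock time, in percent)."""

    def __init__(self, interval=0.05):
        self.interval = interval
        self.samples = []
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        last_wall, last_cpu = time.perf_counter(), time.process_time()
        # Event.wait doubles as a sleep that can be interrupted on stop.
        while not self._stop.wait(self.interval):
            wall, cpu = time.perf_counter(), time.process_time()
            if wall > last_wall:
                self.samples.append(100.0 * (cpu - last_cpu) / (wall - last_wall))
            last_wall, last_cpu = wall, cpu

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()

def busy_workload(n=50_000):
    # Stand-in for one training step: pure CPU-bound arithmetic.
    return sum(i * i for i in range(n))
```

Used as a context manager, `with UtilizationSampler() as s: train()` leaves the per-interval percentages in `s.samples`; on a multi-threaded workload the values can exceed 100%, since `process_time` sums CPU time over all threads.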
In recent years, deep learning (DL) models have been used in many applications involving large datasets and complex models. These applications require methods to train models faster, such as distributed deep learning (DDL). This paper proposes an empirical approach to measuring the speedup DDL achieves with different parallelism strategies on the nodes. Local parallelism is central to the design of a time-efficient multi-node architecture, because overall DDL time depends on the time required by every node. The impact of computational resources (CPU and GPU) is also discussed, since the GPU is known to speed up such computations. Experimental results show that local parallelism affects the global speedup of DDL depending on the complexity of the neural model and the size of the dataset. Moreover, our approach achieves a better speedup than Horovod.
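The speedup metric measured above is conventionally defined as single-node training time over distributed training time, with parallel efficiency normalizing it by the node count. A minimal sketch, where the timing values in the example are invented and not taken from the paper's experiments:

```python
def speedup(t_single, t_distributed):
    """Classic speedup: single-node wall-clock time over distributed time."""
    return t_single / t_distributed

def efficiency(t_single, t_distributed, n_nodes):
    """Speedup normalized by node count; 1.0 means perfect linear scaling."""
    return speedup(t_single, t_distributed) / n_nodes

# Hypothetical example: 120 s on one node, 40 s on four nodes.
# speedup(120, 40) -> 3.0, efficiency(120, 40, 4) -> 0.75
```

Efficiency below 1.0 reflects the communication and synchronization overhead that distributed training adds on top of the per-node computation.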