In this work, we present a methodology to identify COVID-19 spreader countries by analyzing the relationship between socio-cultural and economic characteristics and the number of infections and deaths caused by the COVID-19 virus in different countries. To this end, we analyze the information of each country using a complex-networks approach, specifically by identifying spreader countries through the separator set of 5-layer multiplex networks. The results yield a classification of countries based on their numerical values for socioeconomics, population, Gross Domestic Product (GDP), health, and air connections. Countries in the separator set may have high, medium, or low values across the different characteristics; however, the one feature that all countries belonging to the separator set share is a high value in air connections. INDEX TERMS Complex networks, complex systems, COVID-19, multiplex networks, optimization, social networks.
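As an illustration of the kind of multiplex construction described above (a minimal sketch, not the authors' model), each of the five characteristics can define one layer, with two countries linked in a layer when their values for that characteristic are close. The country names, the normalized values, and the similarity threshold below are all hypothetical.

```python
layers = ["socioeconomics", "population", "gdp", "health", "air_connections"]

# Hypothetical normalized values, one per layer, for four toy countries.
countries = {
    "A": [0.9, 0.2, 0.8, 0.7, 0.9],
    "B": [0.1, 0.9, 0.2, 0.3, 0.8],
    "C": [0.8, 0.3, 0.7, 0.6, 0.1],
    "D": [0.2, 0.8, 0.3, 0.4, 0.9],
}

def build_multiplex(countries, n_layers, threshold=0.3):
    """One edge set per layer; link two countries in a layer when their
    values for that characteristic differ by less than `threshold`."""
    names = sorted(countries)
    multiplex = []
    for layer in range(n_layers):
        edges = set()
        for i, u in enumerate(names):
            for v in names[i + 1:]:
                if abs(countries[u][layer] - countries[v][layer]) < threshold:
                    edges.add((u, v))
        multiplex.append(edges)
    return multiplex

mux = build_multiplex(countries, len(layers))
```

On such a multiplex, a separator-set method would then look for the nodes whose removal disconnects the layers; here the sketch only covers the construction step.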
We study the correlation properties of word lengths in large texts from 30 ebooks in the English language from the Gutenberg Project (www.gutenberg.org) using the natural visibility graph method (NVG). NVG converts a time series into a graph whose graph properties can then be analyzed. First, the original sequence of words is transformed into a sequence of values containing the length of each word, and then it is integrated. Next, we apply the NVG to the integrated word-length series and construct the network. We show that the degree distribution of that network follows a power law, P(k) ∼ k^(−γ), with two regimes, characterized by the exponents γ_s ≈ 1.7 (at short degree scales) and γ_l ≈ 1.3 (at large degree scales). This suggests that word lengths are much more strongly correlated at large distances between words than at short distances. That finding is also supported by detrended fluctuation analysis (DFA) and the recurrence time distribution. These results provide new information about the universal characteristics of the structure of written texts beyond that given by word frequencies.
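The NVG construction can be sketched as follows: two points of the series are linked when the straight line between them passes above every intermediate point. The toy sentence, the mean-subtracted integration convention, and the O(n²) visibility test are illustrative assumptions, not the paper's exact pipeline.

```python
def visibility_graph(series):
    """Natural visibility graph: nodes are time indices; a and b are
    linked if the line between (a, y_a) and (b, y_b) stays above every
    intermediate point (a, b) of the series."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            if all(
                series[c] < series[b] + (series[a] - series[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            ):
                edges.add((a, b))
    return edges

# Toy word-length series; the profile (cumulative sum of mean-subtracted
# lengths) is one common integration convention, assumed here.
words = "the natural visibility graph maps a series onto a network".split()
lengths = [len(w) for w in words]
mean = sum(lengths) / len(lengths)
profile, total = [], 0.0
for x in lengths:
    total += x - mean
    profile.append(total)

g = visibility_graph(profile)
degrees = [0] * len(profile)
for a, b in g:
    degrees[a] += 1
    degrees[b] += 1
```

For a real analysis one would build this network over an entire book and fit the degree distribution P(k) ∼ k^(−γ) in the two regimes; consecutive points are always mutually visible, so every node has degree at least one.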
Inverse percolation is the problem of finding the minimum set of nodes whose link removal causes the rupture of the network. Inverse percolation has been widely used in studies of single-layer networks; however, its generalization to multiplex networks has received little attention. In this work, we propose a methodology based on inverse percolation to quantify the robustness of multiplex networks. Specifically, we present a modified version of the mathematical model for the multiplex-vertex separator problem (m-VSP). By solving the m-VSP, we can find the nodes that break the mutually connected giant component (MCGC) and the largest viable cluster (LVC) when their links are removed from the network. The methodology was tested on a set of benchmark networks, and as a case study we present an analysis of multiplex social networks modeled with information about the main characteristics of the best universities in the world and the universities in Mexico. The results show that the methodology works on different models and types of 2- and 3-layer multiplex networks without dividing the multiplex network into single layers, as some techniques described in the literature do. Furthermore, because the technique does not require computing any structural measure or centrality metric, it scales easily to networks of different sizes.
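The core inverse-percolation idea can be illustrated on a single layer (the actual m-VSP is an optimization model over the MCGC/LVC of a multiplex network): brute-force search for the smallest node set whose link removal breaks the giant component. The toy graph and the "no component larger than half the network" rupture criterion are assumptions for this sketch.

```python
from itertools import combinations

def largest_component(nodes, edges, removed):
    """Size of the largest connected component after deleting the nodes
    in `removed` together with every link incident to them."""
    active = set(nodes) - set(removed)
    adj = {v: set() for v in active}
    for u, v in edges:
        if u in active and v in active:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in active:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:  # depth-first traversal of one component
            u = stack.pop()
            size += 1
            stack.extend(w for w in adj[u] if w not in seen)
            seen.update(adj[u])
        best = max(best, size)
    return best

# Toy single-layer graph: a 6-node path with one chord.
nodes = list(range(6))
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)}

# Smallest node set whose removal leaves no component above n/2 nodes.
separator = None
for k in range(len(nodes) + 1):
    for cand in combinations(nodes, k):
        if largest_component(nodes, edges, cand) <= len(nodes) // 2:
            separator = cand
            break
    if separator is not None:
        break
```

Exhaustive search is only viable for tiny graphs, which is why the paper formulates the problem as a mathematical optimization model instead.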
Abstract: In the present work, we quantify the irregularity of different European languages belonging to four linguistic families (Romance, Germanic, Uralic and Slavic) and an artificial language (Esperanto). We modified a well-known method to calculate the approximate and sample entropy of written texts. We find differences in the degree of irregularity between the families, and our method, which is based on the search for regularities in a sequence of symbols, consistently distinguishes between natural texts and synthetic randomized texts. Moreover, we extended our study to the case where multiple scales are accounted for, as in multiscale entropy analysis. Our results revealed that real texts have a non-trivial structure compared to the ones obtained from randomization procedures.
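The sample-entropy computation underlying this kind of analysis can be sketched as follows (one standard SampEn variant, not the authors' modified method); the toy series, the tolerance r, and the embedding dimension m are illustrative choices.

```python
from math import log

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B): B counts template pairs
    that match for m consecutive points (within tolerance r, self-matches
    excluded), A those that also match for m + 1 points."""
    def match_count(length):
        templ = [tuple(series[i:i + length])
                 for i in range(len(series) - length + 1)]
        return sum(
            1
            for i in range(len(templ))
            for j in range(i + 1, len(templ))
            if max(abs(a - b) for a, b in zip(templ[i], templ[j])) <= r
        )
    b, a = match_count(m), match_count(m + 1)
    return -log(a / b) if a and b else float("inf")

# A perfectly periodic series is highly regular (entropy near zero) ...
regular = [1, 2] * 50
# ... while a chaotic logistic-map series is far more irregular.
x, chaotic = 0.4, []
for _ in range(100):
    x = 4.0 * x * (1.0 - x)
    chaotic.append(x)

e_regular = sample_entropy(regular)
e_chaotic = sample_entropy(chaotic)
```

Applied to texts, the series would be a symbol sequence derived from the writing (the paper's modification addresses exactly that mapping), and a randomized shuffle of the same text should score higher than the original.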
The study of natural language using a network approach has made it possible to characterize novel properties ranging from the level of individual words to phrases or sentences. A natural way to quantitatively evaluate similarities and differences between spoken and written language is by means of a multiplex network defined in terms of a similarity distance between words. Here, we use a multiplex representation of words based on orthographic or phonological similarity to evaluate their structure. From the analysis of the topological properties of these networks, we report different levels of local and global similarity when comparing written vs. spoken structure across 12 natural languages from 4 language families. In particular, we find that the difference between the phonetic and written layers is markedly higher for French and English, while for the other languages analyzed this separation is relatively smaller. We conclude that the multiplex approach allows us to explore additional properties of the interaction between spoken and written language.
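A two-layer word multiplex of this kind can be sketched with Levenshtein distance as the similarity measure; the word list, the pseudo-phonetic transcriptions, and the distance-1 linking rule below are assumptions for illustration, not the paper's data.

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming table,
    keeping only the previous row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similarity_layer(words):
    """Link word pairs (by index) at Levenshtein distance exactly 1."""
    return {(i, j)
            for i in range(len(words))
            for j in range(i + 1, len(words))
            if edit_distance(words[i], words[j]) == 1}

spelling = ["cat", "hat", "bat", "bit", "dog"]
phonetic = ["kat", "hat", "bat", "bIt", "dOg"]  # hypothetical transcriptions

ortho = similarity_layer(spelling)   # written layer
phono = similarity_layer(phonetic)   # spoken layer
jaccard = len(ortho & phono) / len(ortho | phono)  # layer overlap
```

Because both layers share node indices, edge-set overlap (here, a Jaccard index) gives one simple measure of how far the written layer diverges from the phonetic one; in this toy example the layers coincide, whereas for real languages such as French and English they would differ.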