Recent research has highlighted limitations of studying complex systems with time-varying topologies from the perspective of static, time-aggregated networks. Non-Markovian characteristics resulting from the ordering of interactions in temporal networks have been identified as one important mechanism that alters causality and affects dynamical processes. So far, an analytical explanation for this phenomenon, and for the significant variations observed across different systems, has been missing. Here we introduce a methodology that allows us to analytically predict causality-driven changes of diffusion speed in non-Markovian temporal networks. Validating our predictions in six data sets, we show that, compared with the time-aggregated network, non-Markovian characteristics can lead to either a slow-down or a speed-up of diffusion, which can even outweigh the decelerating effect of community structures in the static topology. Thus, non-Markovian properties of temporal networks constitute an important additional dimension of complexity in time-varying complex systems.
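The ordering effect this abstract describes can be probed with a toy experiment: run a deterministic spreading process over a time-stamped contact list, then over a time-shuffled null model that destroys inter-event correlations, and compare how many contacts each needs to reach a given fraction of the nodes. This is a minimal stdlib-only sketch; the event format and function names are illustrative assumptions, not the paper's methodology.

```python
import random

def coverage_steps(events, start, target_frac=0.9):
    """Count how many time-stamped contacts a deterministic spreading
    process needs before target_frac of all nodes are reached.
    events: list of (t, u, v) contacts sorted by time t."""
    nodes = {u for _, u, v in events} | {v for _, u, v in events}
    infected = {start}
    goal = target_frac * len(nodes)
    for step, (t, u, v) in enumerate(events, 1):
        if u in infected or v in infected:
            infected |= {u, v}
        if len(infected) >= goal:
            return step
    return len(events)  # target fraction never reached

def shuffled(events, seed=0):
    """Markovian null model: permute which contact happens when,
    keeping the same multiset of contacts and the same timestamps."""
    rng = random.Random(seed)
    contacts = [(u, v) for _, u, v in events]
    rng.shuffle(contacts)
    return [(t, u, v) for (t, _, _), (u, v) in zip(events, contacts)]
```

Comparing `coverage_steps` on the original and shuffled sequences for the same seed pool illustrates how the mere ordering of contacts, with topology and activity fixed, can slow diffusion down or speed it up.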
Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. How good an algorithm is, in terms of accuracy and computing time, however, remains an open question. Testing algorithms on real-world networks has certain restrictions that can bias the resulting insights: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi (LFR) benchmark graph to test eight state-of-the-art algorithms. We quantify accuracy using complementary measures, and we record each algorithm's computing time. Based on simple network properties and these results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules reveal limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: first, we provide practical techniques to determine which algorithm is best suited in most circumstances, based on observable properties of the network under consideration. Second, we use the mixing parameter as an easily measurable indicator for finding the ranges of reliability of the different algorithms. Finally, we study the dependency on network size, focusing on both the algorithms' predictive power and the effective computing time.
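The mixing parameter mentioned here has a simple operational form: the fraction of edges that run between communities rather than inside them. A minimal stdlib-only sketch, assuming the network is given as an undirected edge list and a node-to-community mapping (function and variable names are hypothetical):

```python
def mixing_parameter(edges, community):
    """Global LFR-style mixing parameter: the fraction of edges
    whose endpoints lie in different communities. edges is a list
    of undirected (u, v) pairs; community maps node -> label."""
    if not edges:
        raise ValueError("empty edge list")
    inter = sum(1 for u, v in edges if community[u] != community[v])
    return inter / len(edges)
```

As a rule of thumb from the benchmark literature, detection becomes hard as the mixing parameter approaches and exceeds 0.5, where nodes have roughly as many external as internal links; this is the kind of easily measurable indicator the study uses to delimit each algorithm's range of reliability.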
What is the role of social interactions in the creation of price bubbles? Answering this question requires obtaining collective behavioural traces generated by the activity of a large number of actors. Digital currencies offer a unique possibility to measure socio-economic signals from such digital traces. Here, we focus on Bitcoin, the most popular cryptocurrency. Bitcoin has experienced periods of rapid increase in exchange rates (price) followed by sharp decline; we hypothesize that these fluctuations are largely driven by the interplay between different social phenomena. We thus quantify four socio-economic signals about Bitcoin from large datasets: price on online exchanges, volume of word-of-mouth communication in online social media, volume of information search and user base growth. By using vector autoregression, we identify two positive feedback loops that lead to price bubbles in the absence of exogenous stimuli: one driven by word of mouth, and the other by new Bitcoin adopters. We also observe that spikes in information search, presumably linked to external events, precede drastic price declines. Understanding the interplay between the socio-economic signals we measured can lead to applications beyond cryptocurrencies to other phenomena that leave digital footprints, such as online social network usage.
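The vector-autoregression step can be illustrated with a toy VAR(1) model, x_t = c + A x_{t-1} + e_t, fitted by per-equation ordinary least squares. This is a pure-Python sketch of the general technique, not the paper's estimation pipeline; the helper names are hypothetical, and real analyses would use a dedicated library with lag selection and causality tests.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def var1_fit(series):
    """Fit x_t = c + A @ x_{t-1} + e_t by per-equation OLS.
    series: list of observation vectors, one per time step.
    Returns coefs, where coefs[eq] = [intercept, a_eq1, ..., a_eqk]."""
    k = len(series[0])
    X = [[1.0] + list(series[t - 1]) for t in range(1, len(series))]
    p = k + 1
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    coefs = []
    for eq in range(k):
        y = [series[t][eq] for t in range(1, len(series))]
        Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(p)]
        coefs.append(solve(XtX, Xty))
    return coefs
```

In a feedback-loop analysis of the kind described, significant positive off-diagonal entries of the fitted matrix in both directions between two signals (e.g. price and word-of-mouth volume) are what indicate a mutually reinforcing loop.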
Relationships between constituents of complex systems (be it in nature, society, or technological applications) can be represented in terms of networks. In this portrayal, the elements composing the system are described as nodes and their interactions as links. At the global level, the topology of these interactions, far from being trivial, is itself of a complex nature [1,2]. Importantly, these networks further display some level of organisation at an intermediate scale.
At this mesoscopic level, it is possible to identify groups of nodes that are heavily connected among themselves but sparsely connected to the rest of the network. These interconnected groups are often characterised as communities, or in other contexts modules, and occur in a wide variety of networked systems [3,4]. Detecting communities has grown into a fundamental and highly relevant problem in network science, with multiple applications. First, it unveils the existence of a non-trivial internal network organisation at a coarse-grained level, which in turn allows us to infer special relationships between nodes that may not be easily accessible through direct empirical tests [5]. Second, it helps to better understand the properties of dynamic processes taking place in a network. As paradigmatic examples, spreading processes of epidemics and innovation are considerably affected by the community structure of the graph [6]. Given its importance, it is not surprising that many community detection methods have been developed, using tools and techniques from disciplines as varied as statistical physics, biology, applied mathematics, computer science, and socio...
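The intuition of "heavily connected inside, sparsely connected outside" is commonly quantified by Newman-Girvan modularity, Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j). A stdlib-only sketch for an undirected, unweighted edge list (function names hypothetical):

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman-Girvan modularity of a partition, computed per
    community as Q = sum_c (e_c / m - (d_c / 2m)^2), where e_c is
    the number of intra-community edges and d_c the total degree
    of community c. Assumes an undirected, loop-free edge list."""
    m = len(edges)
    deg = defaultdict(int)
    intra = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    comm_deg = defaultdict(int)
    for node, k in deg.items():
        comm_deg[community[node]] += k
    return sum(intra[c] / m - (comm_deg[c] / (2 * m)) ** 2
               for c in comm_deg)
```

Putting the whole network in one community yields Q = 0, while a partition that captures genuinely dense groups yields Q > 0; many of the detection algorithms benchmarked in the study optimise exactly this quantity.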
We present conclusive evidence showing that different sources of diversity, such as those represented by quenched disorder or noise, can induce a resonant collective behavior in an ensemble of coupled bistable or excitable systems. Our analytical and numerical results show that when such systems are subjected to an external subthreshold signal, their response is optimized for an intermediate value of the diversity. These findings show that intrinsic diversity might have a constructive role and suggest that natural systems might profit from their diversity in order to optimize the response to an external stimulus.
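The resonance described above can be reproduced qualitatively with a short simulation: an ensemble of globally coupled overdamped bistable units with quenched Gaussian offsets (the diversity) driven by a subthreshold periodic signal. This is a rough Euler-integration sketch under assumed parameter values, not the authors' exact model or parameters.

```python
import math
import random

def ensemble_response(sigma, n=50, coupling=1.0, amp=0.2,
                      omega=2 * math.pi / 50, dt=0.05,
                      steps=4000, seed=1):
    """Amplitude of the mean field of n globally coupled bistable
    units  dx_i/dt = x_i - x_i^3 + a_i + amp*sin(omega*t)
                     + coupling*(mean(x) - x_i)
    at the drive frequency, with quenched diversity a_i ~ N(0, sigma).
    The second half of the run (an integer number of drive periods,
    so the DC offset integrates away) is used for the estimate."""
    rng = random.Random(seed)
    a = [rng.gauss(0.0, sigma) for _ in range(n)]
    x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    cs = sn = 0.0
    for step in range(steps):
        t = step * dt
        mean = sum(x) / n
        drive = amp * math.sin(omega * t)
        x = [xi + dt * (xi - xi ** 3 + ai + drive + coupling * (mean - xi))
             for xi, ai in zip(x, a)]
        if step >= steps // 2:  # discard transient
            cs += mean * math.cos(omega * t) * dt
            sn += mean * math.sin(omega * t) * dt
    T = (steps - steps // 2) * dt
    return 2.0 / T * math.hypot(cs, sn)
```

Sweeping `sigma` from 0 upward and plotting the returned amplitude should, for suitable coupling and drive, produce the bell-shaped curve with a maximum at intermediate diversity that the abstract reports; with zero diversity the subthreshold drive only wiggles the ensemble inside one well.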
A comparative study across the most widely known blockchain technologies is conducted with a bottom-up approach. Blockchains are disentangled into building blocks. Each building block is then hierarchically classified into main components and subcomponents. Alternative layouts for the subcomponents are identified and compared with one another. Finally, a taxonomy tree summarises the study and provides a navigation tool across different blockchain architectural configurations. The solution to these problems requires setting up software reference architectures, in which standardised structures and their respective elements and relations provide templates for concrete blockchain architectures. Standards can emerge naturally through market adoption (industry driven) or be imposed by institutes and organisations. In the first group we may include initiatives such as the Accord Project, the ChinaLedger, or R3. In the second group we may refer to the initiative conducted by the International Organization for Standardization (ISO) with the establishment of the technical committee ISO/TC 307 on blockchain and distributed ledger technologies. Several working groups on different topics have been set up; in particular, the ISO/TC 307/WG1 working group is engaged with the reference architecture, taxonomy and ontology. Overall, a long-term standardisation of the blockchain reference architecture will benefit every industry. A standard for software reference architecture is thus necessary to enable a level playing field in which every industry player and community member can design and adopt blockchain-enabled products or services under the very same conditions, with the possibility of data exchange. As for the Internet, several standardisation institutes (e.g., the IETF in cooperation with the W3C, ISO/IEC, ITU) set a body of standards.
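A taxonomy tree of the kind the study builds can be represented directly as a nested mapping, with leaves holding the alternative layouts; enumerating root-to-leaf paths then lists the concrete architectural configurations the taxonomy permits. The component names below are hypothetical placeholders, not the study's actual classification.

```python
# Hypothetical fragment of a blockchain taxonomy tree:
# building block -> subcomponent -> alternative layouts.
taxonomy = {
    "Consensus": {
        "Proof-based": ["Proof of Work", "Proof of Stake"],
        "Vote-based": ["PBFT", "Raft"],
    },
    "Ledger": {
        "Data structure": ["Chain of blocks", "DAG"],
        "Access": ["Public", "Permissioned"],
    },
}

def layouts(tree, path=()):
    """Yield every root-to-leaf path of the taxonomy, i.e. each
    concrete architectural choice it encodes."""
    if isinstance(tree, dict):
        for key, sub in tree.items():
            yield from layouts(sub, path + (key,))
    else:
        for leaf in tree:
            yield path + (leaf,)
```

Such a navigable structure is what lets a designer compare alternative configurations subcomponent by subcomponent, which is the practical point of the taxonomy tree.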
Internet standards promote interoperability of systems on the Internet by defining precise protocols, message formats, schemas, and languages. As a result, different hardware and software can seamlessly interact and work together. Applied to the World Wide Web (a layer on top of the Internet), standards bring interoperability, accessibility and usability of web pages. Similarly, the adoption of blockchain standards will promote the blossoming and proliferation of interoperable blockchain-enabled applications. Thus, if we envisage a future where blockchains will be one of the pillars of our society's development, it is necessary to begin discussing and identifying standards for blockchain reference architectures. The aim of this study is to highlight the need for standard technical reference models of blockchain architectures. This aligns with the current industry sentiment, which pushes standardisation organisations to set industry standards. In order to support an appropriate co-regulatory framework for blockchain-related industries, a multi-party approach is necessary, as it is for the Internet, where national standards, international standards and...
We develop a dynamic network formation model that can explain the observed nestedness in real-world networks. Links are formed on the basis of agents' centrality and have an exponentially distributed lifetime. We use stochastic stability to identify the networks to which the network formation process converges and find that they are nested split graphs. We completely determine the topological properties of the stochastically stable networks and show that they match features exhibited by real-world networks. Using four different network data sets, we empirically test our model and show that it fits the observed networks well.

[6] Mele (2010) and Liu et al. (2012) provide interesting dynamic network formation models in which individuals decide with whom to form links by maximizing a utility function. Contrary to our model, however, these papers do not characterize analytically the degree distribution and the resulting network statistics.
[7] See Jackson and Zenou (2014) for a recent overview of this literature.
[8] Bramoullé and Kranton (2007), Bramoullé et al. (2014), and Galeotti et al. (2010) are also important papers in this literature. The first paper focuses on strategic substitutabilities, while the second provides a general framework for solving any game on networks with perfect information and linear best-reply functions. The last paper investigates the case where agents do not have perfect information about the network. Because of its tractability, in the present paper we use the model of Ballester et al. (2006), who analyze a network game of local complementarities under perfect information.
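The nested split graphs the abstract refers to can be recognised by their neighbourhood-nesting property: for any two nodes, one node's neighbourhood contains the other's once the pair itself is ignored. A stdlib-only sketch of this check (function name hypothetical); the pairwise test is quadratic in the number of node pairs, which is fine for illustration.

```python
def has_nested_neighborhoods(adj):
    """Check the neighbourhood-nesting property characteristic of
    nested split (threshold-like) graphs: for every pair of nodes,
    one neighbourhood contains the other, ignoring the pair itself.
    adj: dict mapping node -> set of neighbours (undirected graph)."""
    nodes = list(adj)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            nu = adj[u] - {v}
            nv = adj[v] - {u}
            if not (nu <= nv or nv <= nu):
                return False
    return True
```

A star graph passes the check (every leaf's neighbourhood is contained in the hub's), while a path on four nodes fails: its two endpoints have disjoint, non-nested neighbourhoods, which is exactly the kind of structure nestedness rules out.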