We study functional activity in the human brain using functional magnetic resonance imaging and recently developed tools from network science. The data arise from the performance of a simple behavioural motor learning task. Unsupervised clustering of subjects with respect to similarity of network activity measured over 3 days of practice produces significant evidence of 'learning', in the sense that subjects typically move between clusters (of subjects whose dynamics are similar) as time progresses.
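The clustering step can be sketched as follows. This is a minimal illustration with synthetic similarity matrices, not the study's actual fMRI pipeline: the number of subjects, the three-cluster choice, and the pair-counting summary of movement between clusters are all assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# Hypothetical stand-in: one subject-by-subject similarity matrix per day.
n_subjects, n_days = 20, 3
labels_per_day = []
for day in range(n_days):
    S = rng.random((n_subjects, n_subjects))
    S = (S + S.T) / 2              # symmetrise the toy similarities
    np.fill_diagonal(S, 1.0)
    D = squareform(1.0 - S, checks=False)   # similarity -> condensed distance
    Z = linkage(D, method="average")
    labels_per_day.append(fcluster(Z, t=3, criterion="maxclust"))

# Alignment-free 'movement' signal: count subject pairs that are co-clustered
# on one day but separated on the next (labels need not match across days).
together = [np.equal.outer(l, l) for l in labels_per_day]
changes = [int(np.sum(together[d] != together[d + 1]) // 2)
           for d in range(n_days - 1)]
```

Pair-counting avoids the label-matching problem: cluster IDs from independent runs are arbitrary, but "were these two subjects grouped together?" is well defined on every day.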
This work explores simulations of polarized discussions from a general, theoretical premise: specifically, whether a plausible avenue exists for a subgroup in an online social network to find a disagreement beneficial, and what that benefit could be. A methodological framework is proposed which represents key factors that drive social media engagement, including the iterative accumulation of influence and the asymmetric treatment of messages during a disagreement. It is shown that, prior to a polarization event, a trend towards a more uniform distribution of relative influence emerges, which the polarization event then reverses. The reasons for this reversal are discussed, along with its plausible analogue in real-world systems. A pair of inoculation strategies are proposed which aim to restore the trend towards uniform influence across users while refraining from violating user privacy (by remaining topic agnostic) and from removing users.
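One minimal reading of the flatten-then-reverse influence dynamic can be sketched as follows. This is entirely schematic, not the paper's framework: the population size, the even-gain versus subgroup-capture update rules, and the Gini coefficient as the uniformity summary are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n, half = 30, 600
influence = rng.random(n) * 10 + 1   # uneven starting influence

def gini(x):
    # Standard Gini coefficient: 0 means perfectly uniform influence.
    x = np.sort(np.asarray(x, dtype=float))
    i = np.arange(1, len(x) + 1)
    return float((2 * i - len(x) - 1).dot(x) / (len(x) * x.sum()))

g_start = gini(influence)
for _ in range(half):
    # Ordinary engagement: gains land evenly, so relative influence flattens.
    influence[rng.integers(n)] += 1.0
g_mid = gini(influence)
for _ in range(half):
    # Polarization event: a small subgroup's provocative messages
    # capture the engagement, reversing the trend toward uniformity.
    influence[rng.integers(n // 10)] += 1.0
g_end = gini(influence)
```

Under these assumptions the Gini coefficient drops during the even phase and rises again after the event, mirroring the reversal described above.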
Online human interactions take place within a dynamic hierarchy, where social influence is determined by qualities such as status, eloquence, trustworthiness, authority and persuasiveness. In this work, we consider topic-based Twitter interaction networks, and address the task of identifying influential players. Our motivation is the strong desire of many commercial entities to increase their social media presence by engaging positively with pivotal bloggers and tweeters. After discussing some of the issues involved in extracting useful interaction data from a Twitter feed, we define the concept of an active node subnetwork sequence. This provides a time-dependent, topic-based summary of relevant Twitter activity. For these types of transient interactions, it has been argued that the flow of information, and hence the influence of a node, is highly dependent on the timing of the links. Some nodes with relatively small bandwidth may turn out to be key players because of their prescience and their ability to instigate follow-on network activity. To simulate a commercial application, we build an active node subnetwork sequence based on key words in the area of travel and holidays. We then compare a range of network centrality measures, including a recently proposed version that accounts for the arrow of time, with respect to their ability to rank important nodes in this dynamic setting. The centrality rankings use only connectivity information (who tweeted whom, when), without requiring further information about the account type or message content. However, if we post-process the results by examining account details, we find that the time-respecting, dynamic approach, which looks at the follow-on flow of information, is less likely to be 'misled' by accounts that appear to generate large numbers of automatic tweets with the aim of pushing out web links.
We then benchmark these algorithmically derived rankings against independent feedback from five social media experts who, given access to the full tweet content, judge Twitter accounts as part of their professional duties. We find that the dynamic centrality measures add value to the expert view, and can be hard to distinguish from an expert in terms of who they place in the top ten. These algorithms, which involve sparse-matrix linear system solves with sparsity driven by the underlying network structure, can be applied to very large-scale networks. We also test an extension of the dynamic centrality measure that allows us to monitor the change in ranking, as a function of time, of the Twitter accounts that were eventually deemed influential.
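A time-respecting centrality of the kind described above can be sketched as a Katz-like product of resolvents over snapshot adjacency matrices, accumulated via linear solves. A small random network stands in for the Twitter data, and the downweighting parameter `a` and all sizes are illustrative (a dense solve is used here for clarity; at scale the solves would exploit sparsity).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sequence of M directed snapshot adjacency matrices: A[k][i, j] = 1
# if i tweeted j during time window k.
n, M, a = 8, 5, 0.1
A = [(rng.random((n, n)) < 0.2).astype(float) for _ in range(M)]
for Ak in A:
    np.fill_diagonal(Ak, 0.0)

# Dynamic communicability product Q = (I - aA_1)^{-1} ... (I - aA_M)^{-1}:
# it counts walks that respect the arrow of time, downweighted by length.
Q = np.eye(n)
for Ak in A:
    # Update Q <- Q (I - a A_k)^{-1} via a linear solve, not an explicit inverse.
    Q = np.linalg.solve((np.eye(n) - a * Ak).T, Q.T).T

broadcast = Q.sum(axis=1)   # ability to send information forward in time
receive = Q.sum(axis=0)     # ability to collect information over time
```

Because later snapshots only ever appear to the right of earlier ones in the product, a walk can never use a late edge before an early one, which is exactly the arrow-of-time property that static aggregation loses.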
The Eurovision Song Contest (ESC) is an annual event which attracts millions of viewers. It is an interesting activity to examine, since each participating country fields a musical performance that is awarded a set of scores by the other participating countries based upon an assessment of its quality. The question is whether countries vote exclusively according to the artistic merit of the song, or whether a vote serves as a public signal of national support for another country. Since the competition aims to bring people together, any consistent bias in the awarding of scores would defeat the purpose of this celebration of expression, and this has led researchers to investigate the evidence for such biases. This paper builds upon an approach which produces a set of random samples from an unbiased distribution of score allocations, and extends the methodology to use the full set of years of the competition's life span, which has seen fundamental changes to the voting schemes adopted. By building networks from the statistically significant edge sets of vote allocations over sets of years, the results display a plausible network for the origins of the cultural anchors of voting preferences. With years of data, the results support the hypothesis of regional collusion and of biases arising from proximity, culture and other factors irrelevant to the music, which alone is intended to affect the judgment of the contest.
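The resampling idea can be illustrated with a toy permutation test: compare an observed voter-to-target score against the same statistic under repeated unbiased (random) score allocations. This is not the actual ESC scoring scheme or dataset; the country names, reduced score set, number of years, and the one-sided statistic are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
countries = ["A", "B", "C", "D", "E"]
points = [12, 10, 8, 7]   # simplified score set each voter distributes

def random_year():
    # One unbiased allocation: every country ranks the others uniformly at random.
    out = {}
    for c in countries:
        targets = [d for d in countries if d != c]
        rng.shuffle(targets)
        out[c] = dict(zip(targets, points))
    return out

# Hypothetical 'observed' data (here itself random, purely for illustration):
years = [random_year() for _ in range(10)]
obs = np.mean([y["A"]["B"] for y in years])   # average score A gave B

# Null distribution of the same statistic under unbiased allocations.
null = np.array([np.mean([random_year()["A"]["B"] for _ in range(10)])
                 for _ in range(500)])
p_value = float(np.mean(null >= obs))   # one-sided evidence that A favours B
```

An edge A→B would enter the significant edge set only when `p_value` falls below a chosen threshold; repeating this over all ordered country pairs and year windows yields the networks described above.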
In many datasets gathered from online social networks, well-defined community structures have been observed. A large number of users participate in these networks, and the size of the resulting graphs poses computational challenges. There is a particular demand for identifying the nodes responsible for information flow between communities; for example, in temporal Twitter networks, edges between communities play a key role in propagating spikes of activity when the connectivity between communities is sparse and few edges exist between different clusters of nodes. The new algorithm proposed here aims to reveal these key connections by measuring a node's vicinity to nodes of another community. We look at the nodes which have edges in more than one community, and at the locality of nodes around them, which influences the information they receive and broadcast. The method relies on independent random walks of a chosen fixed number of steps, originating from nodes with edges in more than one community. For the large networks that we have in mind, existing measures such as betweenness centrality are difficult to compute, even with recent methods that approximate the large number of operations required. We therefore design an algorithm that scales to current big-data requirements and can harness parallel processing capabilities. The new algorithm is illustrated on synthetic data, where results can be judged carefully, and also on real, large-scale Twitter activity data, where new insights can be gained.
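A minimal serial version of the random-walk measure can be sketched on a two-community toy graph. The graph, walk length, and walk count are assumptions; the walks are independent, which is what makes the real algorithm embarrassingly parallel.

```python
import random

random.seed(3)

# Toy graph: two cliques {0..4} and {5..9} joined by the single edge (4, 5).
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]
edges += [(i, j) for i in range(5, 10) for j in range(i + 1, 10)]
edges += [(4, 5)]
adj = {v: [] for v in range(10)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
community = {v: 0 if v < 5 else 1 for v in range(10)}

# Walk origins: nodes with edges into more than one community.
boundary = [v for v in adj
            if len({community[u] for u in adj[v]} | {community[v]}) > 1]

def cross_score(start, walks=200, steps=5):
    # Fraction of short independent random walks from `start` that end
    # in a different community -- a proxy for vicinity to the other side.
    hits = 0
    for _ in range(walks):
        v = start
        for _ in range(steps):
            v = random.choice(adj[v])
        hits += community[v] != community[start]
    return hits / walks

scores = {v: cross_score(v) for v in boundary}
```

Each walk touches only a fixed number of adjacency lookups, so the cost per origin is independent of graph size, unlike exact betweenness centrality.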
We develop and test an intuitively simple dynamic network model to describe the type of time-varying connectivity structure present in many technological settings. The model assumes that nodes have an inherent hierarchy governing the emergence of new connections. This idea draws on newly established concepts in online human behavior concerning the existence of discussion catalysts, who initiate long threads, and online leaders, who trigger feedback. We show that the model captures an important property found in email and voice call data: 'dynamic communicators' with sufficient foresight or impact to generate effective links have an influence that is grossly underestimated by static measures based on snapshots or aggregated data.
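One minimal reading of a hierarchy-governed snapshot model can be sketched as follows. This is not the paper's exact model: the rank distribution, the edge-probability form, and the snapshot count are assumptions chosen only to show high-rank nodes re-emerging as hubs across time.

```python
import numpy as np

rng = np.random.default_rng(4)
n, steps = 30, 50

# Hypothetical inherent hierarchy: each node's fixed propensity
# to attract new connections, sorted for readability.
rank = np.sort(rng.random(n))[::-1]

snapshots = []
for _ in range(steps):
    # A directed edge (i, j) appears with probability proportional to the
    # product of both endpoints' ranks, so catalysts keep re-emerging as hubs.
    P = np.outer(rank, rank)
    np.fill_diagonal(P, 0.0)
    A = (rng.random((n, n)) < 0.3 * P).astype(float)
    snapshots.append(A)

# Average out-degree per node over the snapshot sequence.
avg_degree_by_rank = np.mean([A.sum(axis=1) for A in snapshots], axis=0)
```

Averaging over snapshots shows the hierarchy driving activity; what a static aggregate of these snapshots loses is the timing, which is exactly what makes dynamic communicators look unremarkable to static measures.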
Governments, policy makers, and officials around the globe are working to mitigate the effects of the COVID-19 pandemic by making decisions that strive to save the most lives and impose the least economic costs. Making these decisions requires a comprehensive understanding of the dynamics by which the disease spreads. In traditional epidemiological models, individuals do not adapt their contact behavior during an epidemic, yet adaptive behavior is well documented (e.g., fear-induced social distancing). In this work we revisit Epstein's "coupled contagion dynamics of fear and disease" model in order to extend and adapt it to explore fear-driven behavioral adaptations and their impact on efforts to combat the COVID-19 pandemic. The inclusion of contact behavior adaptation endows the resulting model with rich dynamics that, under certain conditions, endogenously reproduce multiple waves of infection. We show that the model provides an appropriate test bed for different containment strategies, such as testing with contact tracing and travel restrictions. The results show that while both strategies can flatten the epidemic curve and significantly reduce the maximum number of infected individuals, testing must be paired with tracing of the tested individuals' previous contacts to be effective.
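The fear-coupled mechanism can be sketched with a deliberately simplified compartmental variant. This is not Epstein's agent-based model or the paper's extension: the functional form of the fear response and all parameter values are assumptions, chosen only to show curve flattening when contacts drop as visible infection rises.

```python
def run(beta=0.4, gamma=0.1, fear=8.0, days=300, i0=1e-3):
    # Discrete-time SIR (population normalised to 1) in which the effective
    # contact rate falls as fear of the visible epidemic grows.
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        contact = beta / (1.0 + fear * i)   # fear-induced social distancing
        new_inf = contact * s * i
        rec = gamma * i
        s -= new_inf
        i += new_inf - rec
        r += rec
        peak = max(peak, i)
    return peak

peak_no_fear = run(fear=0.0)   # classical SIR baseline
peak_fear = run(fear=8.0)      # adaptive contact behavior
```

Under these assumptions, the adaptive run reaches a lower infection peak than the baseline; mechanisms such as fear decay or waning caution, omitted here, are the kind of ingredient that can let suppressed epidemics rebound into multiple waves.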
The Schelling model of segregation has been shown to produce simulation traces in which the entropy of the states decreases as the aggregate number of residential agents surrounded by a threshold fraction of identically labeled agents increases. This presents an apparent paradox, since the second law of thermodynamics states that entropy must increase, and in efforts to bring principles of physics into the modeling of sociological phenomena this must be addressed. A modification of the model is introduced in which each residential agent is given a monetary variable (sampled from reported income data), together with a dynamic that acts upon this variable whenever an agent changes its location on the grid. The entropy of the simulation over the iterations is estimated in terms of the aggregate residential homogeneity and the aggregate income homogeneity. The dynamic on the monetary variable is shown to be capable of increasing the entropy of the states over the simulation. The traces over both variables follow the shape of the entropy region, supporting the conclusion that the entropy decrease due to residential clustering is accompanied by a parallel, independent entropy increase via the monetary variable.
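A compact sketch of a Schelling grid with a monetary variable attached to each agent is given below. This is not the paper's model: the grid size, happiness threshold, lognormal stand-in for reported income data, the multiplicative moving cost/gain, and the histogram-based entropy estimator are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
side, thr = 20, 0.5
grid = rng.choice([0, 1, 2], size=(side, side), p=[0.1, 0.45, 0.45])  # 0 = vacant
income = rng.lognormal(10, 1, size=(side, side)) * (grid > 0)  # hypothetical incomes

def local_same_fraction(g, x, y):
    # Fraction of occupied Moore neighbours sharing this agent's label (torus).
    nb = [g[(x + i) % side, (y + j) % side]
          for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0)]
    occ = [v for v in nb if v != 0]
    return sum(v == g[x, y] for v in occ) / len(occ) if occ else 1.0

def shannon_entropy(values, bins=10):
    p, _ = np.histogram(values, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

for _ in range(300):   # one randomly chosen unhappy agent may move per iteration
    xs, ys = np.where(grid > 0)
    k = rng.integers(len(xs))
    x, y = xs[k], ys[k]
    if local_same_fraction(grid, x, y) >= thr:
        continue
    empties = list(zip(*np.where(grid == 0)))
    ex, ey = empties[rng.integers(len(empties))]
    grid[ex, ey], grid[x, y] = grid[x, y], 0
    # The monetary dynamic: relocation perturbs the mover's income.
    income[ex, ey], income[x, y] = income[x, y] * rng.uniform(0.9, 1.1), 0.0

occupied = grid > 0
h_res = shannon_entropy([local_same_fraction(grid, x, y)
                         for x, y in zip(*np.where(occupied))])
h_inc = shannon_entropy(income[occupied])
```

The two estimates correspond to the two aggregate variables in the abstract: `h_res` falls as residential clustering sharpens the homogeneity distribution, while the relocation dynamic spreads the income distribution and so can push `h_inc` upward.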