Detecting nodal activities in dynamic social networks has strategic importance in many applications, such as online marketing campaigns and homeland security surveillance. How peer-to-peer exchanges in social media can facilitate nodal activity detection is not well explored. Existing models assume network nodes are static over time and do not adequately consider features from social theories. This research developed and validated two theory-based models, the Random Interaction Model (RIM) and the Preferential Interaction Model (PIM), to characterize temporal nodal activities in social media networks of human agents. The models capture the network characteristics of randomness and preferential interaction due to community size, human bias, declining connection cost, and rising reachability. The models were compared against three benchmark models (abbreviated as EAM, TAM, and DBMM) using a social media community consisting of 790,462 users who posted over 3,286,473 tweets and formed more than 3,055,797 links during 2013-2015. The experimental results show that both RIM and PIM significantly outperformed EAM and TAM in accuracy across different dates and time windows. Both PIM and RIM scored significantly smaller errors than DBMM did. Structural properties of social networks were found to provide a simple yet accurate approach to predicting model performance. These results indicate the models' strong capability of accounting for user interactions in real-world social media networks and temporal activity detection. This research provides new approaches for temporal network activity detection, develops relevant new measures, and reports new findings from large social media datasets. CCS Concepts: • Information systems → Data analytics; • Networks → Social media networks; • Human-centered computing → Social media; • Applied computing → Marketing;
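The contrast between the two interaction models can be illustrated with a minimal sketch: under random interaction every node is equally likely to be chosen as an interaction partner, while under preferential interaction the choice is weighted by a node's current connectivity. The degree-proportional weighting below is the classic preferential-attachment assumption, not the paper's exact formulation; the function names and the +1 smoothing term are hypothetical.

```python
import random

def sample_partner_random(nodes):
    """RIM-style choice: every node in the community is equally likely."""
    return random.choice(nodes)

def sample_partner_preferential(degrees):
    """PIM-style choice: probability proportional to current degree.

    `degrees` maps node -> current degree. The +1 smoothing lets
    isolated nodes still be selected (an assumption for this sketch).
    """
    nodes = list(degrees)
    weights = [degrees[n] + 1 for n in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]
```

Repeatedly sampling from `sample_partner_preferential({"hub": 100, "leaf": 1})` picks the high-degree node far more often, which is the rich-get-richer dynamic the preferential model encodes.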
In this paper, we propose a novel transformer-based deep neural network model to learn semantic bug patterns from a corpus of buggy/fixed code, then generate correct code automatically. The Transformer is a deep learning model that relies entirely on an attention mechanism to model global dependencies between input and output. Although there have been a few attempts to repair programs by learning neural language models (NLMs), important program properties, such as the structure and semantics of identifiers, are not adequately considered when embedding the input sequence and designing the model, which degrades performance. In the proposed Bug-Transformer, we design a novel context abstraction mechanism to better support neural language models. Specifically, it is capable of 1) compressing code information while preserving the key structure and semantics, which provides more thorough information for NLM models, 2) renaming identifiers and literals based on their lexical scopes and structural and semantic information, to reduce the code vocabulary size, and 3) reserving keywords and selected idioms (domain- or developer-specific vocabularies) for better understanding of code structure and semantics. Hence, Bug-Transformer adequately embeds code structural and semantic information into the input data and optimizes the attention-based transformer neural network to handle code features well, improving learning for bug-repair tasks. We evaluate the performance of the proposed work comprehensively on three datasets (Java code corpora) and generate patches for buggy code using a beam search decoder. The experimental results show that our proposed work outperforms state-of-the-art techniques: Bug-Transformer successfully predicts 54.81%, 34.45%, and 42.40% of the fixed code in the three datasets, respectively, exceeding the baseline models. These success rates increase steadily with beam size.
In addition, the overall syntactic correctness of all generated patches remains above 97%, 96%, and 50% on the three benchmarks, respectively, regardless of beam size.
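The beam search decoding mentioned above can be sketched generically: at each step the decoder expands every partial patch with its candidate next tokens, keeps only the `beam_size` highest-scoring sequences, and stops once all surviving beams are complete. This is a standard beam search over an assumed `score_next` interface (a function returning log-probabilities for next tokens), not Bug-Transformer's actual decoder.

```python
def beam_search(score_next, start, beam_size, max_len, eos="<eos>"):
    """Generic beam-search decoding sketch.

    `score_next(seq)` is assumed to return a dict mapping candidate
    next tokens to their log-probabilities given the sequence so far.
    Returns the highest-scoring token sequence ending in `eos` (or the
    best partial sequence if `max_len` is reached first).
    """
    beams = [([start], 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            if seq[-1] == eos:
                candidates.append((seq, logp))  # complete; carry forward
                continue
            for tok, lp in score_next(seq).items():
                candidates.append((seq + [tok], logp + lp))
        # Keep only the beam_size best hypotheses.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        if all(seq[-1] == eos for seq, _ in beams):
            break
    return max(beams, key=lambda c: c[1])[0]
```

A larger `beam_size` explores more candidate patches per step, which is consistent with the reported success rates rising with beam size, at the cost of more decoding work.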