Real networks are heterogeneous, with nodes playing very different roles in structure and function. Identifying vital nodes is thus of great significance, allowing us to control epidemic outbreaks, target advertisements for e-commerce products, predict popular scientific publications, and so on. Vital-node identification has attracted increasing attention from both the computer science and physics communities, with algorithms ranging from simply counting immediate neighbors to sophisticated machine learning and message-passing approaches. In this review, we clarify the concepts and metrics, classify the problems and methods, review important progress, and describe the state of the art. Furthermore, we provide extensive empirical analyses comparing well-known methods on disparate real networks, and highlight future directions. Despite the emphasis on physics-rooted approaches, unifying the language and comparing with cross-domain methods should trigger interdisciplinary solutions in the near future.
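As a concrete illustration of the simplest identification strategy mentioned above, counting immediate neighbors (degree centrality), here is a minimal Python sketch that ranks the nodes of a toy network by degree; the edge list is a hypothetical example, not data from the review.

from collections import defaultdict

# Hypothetical undirected network given as an edge list (illustrative only).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Nodes sorted from most to least "vital" under plain degree centrality.
ranking = sorted(degree, key=degree.get, reverse=True)
print(ranking)  # node 0 (degree 3) ranks first

More refined methods, from eigenvector-based centralities to message passing, replace this purely local count with scores that account for the global network structure.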
Finding an optimal subset of nodes in a network whose removal efficiently disrupts the functioning of a corrupt or criminal organization, or contains an epidemic or the spread of misinformation, is a highly relevant problem in network science. In this paper, we address the generalized network-dismantling problem, which aims to find a set of nodes whose removal fragments the network into subcritical components at minimal overall cost. Compared with previous formulations, we allow the costs of node removals to take arbitrary nonnegative real values, which may depend on topological properties such as node centrality or on nontopological features such as the price or protection level of a node. Interestingly, we show that nonunit costs imply a significantly different dismantling strategy. To solve this optimization problem, we propose a method based on the spectral properties of a node-weighted Laplacian operator, combined with a fine-tuning mechanism related to the weighted vertex cover problem. The proposed method is applicable to large-scale networks with millions of nodes. It outperforms current state-of-the-art methods and opens new directions for understanding the vulnerability and robustness of complex systems.
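The dismantling method above relies on the spectral properties of a node-weighted Laplacian. The sketch below shows the general idea under one common construction, folding node removal costs c_i into edge weights w_ij = c_i + c_j and splitting the network by the sign of the Fiedler vector; this construction and the toy data are assumptions for illustration, and the paper's exact operator and fine-tuning step may differ.

import numpy as np

def fiedler_partition(adj, cost):
    """Bipartition nodes by the sign of the second-smallest eigenvector
    of a cost-weighted Laplacian (a rough dismantling cut)."""
    w = adj * (cost[:, None] + cost[None, :])  # fold node costs into edge weights
    lap = np.diag(w.sum(axis=1)) - w           # weighted Laplacian D_W - A_W
    _, vecs = np.linalg.eigh(lap)              # eigenvectors, ascending eigenvalues
    return vecs[:, 1] >= 0                     # membership of one side of the cut

# Toy 5-node path graph with heterogeneous removal costs (illustrative only).
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
costs = np.array([1.0, 5.0, 1.0, 1.0, 1.0])
print(fiedler_partition(A, costs))  # splits {0,1,2} from {3,4}; sign orientation may flip

Candidate removals then come from the boundary between the two sides, with cheap nodes preferred, which is where a vertex-cover-style fine-tuning enters.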
The robustness of complex networks under targeted attacks is deeply connected to the resilience of complex systems, i.e., the ability to respond appropriately to an attack. In this paper, we study the robustness of complex networks under the realistic assumption that the cost of removing a node is not constant but rather proportional to its degree, or equivalently to the number of links its removal eliminates. We investigate state-of-the-art targeted node-removal algorithms and demonstrate that they become very inefficient once the cost of the attack is taken into consideration. For the case in which links can be attacked or removed, we propose a simple and efficient edge-removal strategy named Hierarchical Power Iterative Normalized cut (HPI-Ncut). Results on real and artificial networks show that the HPI-Ncut algorithm outperforms all node-removal and link-removal attack algorithms when the same definition of cost is taken into consideration. In addition, we show that, on sparse networks, the complexity of this hierarchical power-iteration edge-removal algorithm is only $O(n\log^{2+\epsilon}(n))$.
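To make the power-iteration normalized-cut idea concrete, the sketch below approximates the second eigenvector of the normalized adjacency D^{-1/2} A D^{-1/2} by power iteration (deflating the known leading eigenvector) and returns the edges crossing the resulting sign cut. This is a simplified, single-level reading of the strategy described above with an illustrative toy graph; the published algorithm's hierarchical recursion and exact normalization are not reproduced here.

import numpy as np

def ncut_cut_edges(adj, iters=200, seed=0):
    """One bipartition step: power iteration toward the Fiedler direction
    of the normalized Laplacian, then collect edges across the sign cut.
    Assumes every node has at least one link."""
    deg = adj.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(deg)
    m = d_isqrt[:, None] * adj * d_isqrt[None, :]      # D^{-1/2} A D^{-1/2}
    top = np.sqrt(deg) / np.linalg.norm(np.sqrt(deg))  # known leading eigenvector
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(len(deg))
    for _ in range(iters):
        v = 0.5 * (v + m @ v)        # lazy step keeps the spectrum nonnegative
        v -= (top @ v) * top         # deflate the trivial leading direction
        v /= np.linalg.norm(v)
    side = v >= 0
    n = len(deg)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if adj[i, j] and side[i] != side[j]]

# Two triangles joined by one bridge edge (illustrative toy graph).
A = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[a, b] = A[b, a] = 1.0
print(ncut_cut_edges(A))  # expected: [(2, 3)], the bridge between the triangles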
Recommender systems use the historical activities and personal profiles of users to uncover their preferences and recommend objects. Most previous methods are based on objects' (and/or users') similarity rather than on their difference. Such approaches run a high risk of increasingly exposing users to a narrowing band of popular objects. As a result, a few objects may be recommended to an enormous number of users, producing the problem of recommendation congestion, which is to be avoided, especially when the recommended objects are limited resources. To quantitatively measure a recommendation algorithm's ability to avoid congestion, we proposed a new metric inspired by the Gini index, which is used to measure the inequality of the individual wealth distribution in an economy. Besides this, a new recommendation method called directed weighted conduction (DWC) was developed by considering the heat-conduction process on a user-object bipartite network with different thermal conductivities. Experimental results obtained for three benchmark data sets showed that the DWC algorithm can effectively avoid system congestion and greatly improve novelty and diversity, while retaining relatively high accuracy, in comparison with state-of-the-art methods.

In comparison to traditional tools such as search engines [4] (which require precise keywords) and portals (where contents are classified by topic), recommender systems provide a different way to filter information and return personalized results to different users. Recommender systems [5] use the historical track record of users' activities, and possibly personal profiles, to uncover their preferences and tastes, and recommendations are made on this basis. These techniques have already found wide application: sellers carefully study users' previous purchases and recommend other products the users may like, to enhance their sales (e.g., Amazon, www.amazon.com); social websites analyze users' contact information to help them find new friends and keep them engaged (e.g., Facebook, www.facebook.com); and online radio stations remember skipped songs in order to serve users better in the future (e.g., Pandora, www.pandora.com). In general, whenever there are plenty of diverse products and customers are not alike, personalized recommendation may help deliver the right content to the right person.

Although originally a research field dominated by computer scientists, the study of recommender systems has attracted much attention from researchers in other disparate realms and has now also become a topic of interest for mathematicians, physicists, and psychologists. Accordingly, various kinds of recommendation algorithms have been proposed, including collaborative filtering…
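As a small illustration of the congestion metric described above, the following sketch computes the standard Gini coefficient over per-object recommendation counts: a value near 0 indicates evenly spread recommendations, while a value near 1 indicates congestion on a few objects. The counts are hypothetical, and the paper's exact variant of the index may differ from this textbook formula.

import numpy as np

def gini(counts):
    """Standard Gini coefficient: 0 for perfectly even exposure,
    approaching 1 when recommendations pile onto a single object."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    return (2.0 * (ranks * x).sum()) / (n * x.sum()) - (n + 1.0) / n

# Hypothetical recommendation counts over five objects.
print(gini([100, 2, 1, 1, 1]))     # about 0.76: heavy congestion on one object
print(gini([21, 21, 21, 21, 21]))  # exactly 0: perfectly even exposure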