Data dissemination is a fundamental task in wireless sensor networks. Because of radio range limitations and energy consumption constraints, sensor data is commonly disseminated in a multihop fashion (flat networks) through a tree topology. However, to the best of our knowledge, none of the current solutions determines when the dissemination topology needs to be rebuilt. This work addresses this problem by introducing information fusion mechanisms, in which traffic is treated as a signal that is filtered and translated into evidence indicating the likelihood of critical failures. This evidence is combined by a Dempster-Shafer engine to detect the need for topology reconstruction. Our solution, called the Topology Rebuilding Algorithm (TRA), is evaluated through a set of simulations. In the experiments, TRA proved efficient at avoiding unnecessary topology reconstructions. Compared to periodic rebuilding, in some cases TRA reduced the traffic overhead by nearly 35%.
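The abstract names Dempster-Shafer evidence combination as the core of TRA's failure detection. As a minimal, illustrative sketch of Dempster's rule of combination (the specific evidence sources, mass assignments, and frame of discernment below are hypothetical, not taken from the paper):

```python
# Illustrative sketch of Dempster's rule of combination.
# Mass functions map frozensets of hypotheses to belief mass.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions via Dempster's rule, normalizing out conflict."""
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Two hypothetical evidence sources about a critical topology failure:
F, OK = frozenset({"fail"}), frozenset({"ok"})
THETA = F | OK  # frame of discernment (complete ignorance)
m_traffic = {F: 0.6, OK: 0.1, THETA: 0.3}  # evidence from filtered traffic
m_loss    = {F: 0.5, OK: 0.2, THETA: 0.3}  # evidence from packet loss
m = combine(m_traffic, m_loss)
# Combined belief in "fail" rises above either source alone (~0.76 here),
# which a detection engine could compare against a rebuild threshold.
```

The actual evidence model, filtering step, and decision threshold used by TRA are not specified in the abstract; this only shows the combination rule itself.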
Target tracking is an important application of wireless sensor networks. A network's ability to locate and track an object is directly linked to the nodes' ability to locate themselves. Consequently, localization systems are essential for target tracking applications. In addition, sensor networks are often deployed in remote or hostile environments, so density control algorithms are used to increase network lifetime while maintaining its sensing capabilities. In this work, we analyze the impact of localization algorithms (RPE and DPE) and density control algorithms (GAF, A3, and OGDC) on target tracking applications. We adapt the density control algorithms to address the k-coverage problem. In addition, we analyze the impact of network density, residual integration with density control, and k-coverage on both target tracking accuracy and network lifetime. Our results show that DPE is a better choice for target tracking applications than RPE. Moreover, OGDC is the best option among the three evaluated density control algorithms. Although the choice of density control algorithm has little impact on tracking precision, OGDC outperforms GAF and A3 in terms of tracking time.
One challenging issue in information science, biological systems, and many other fields is determining the most central or relevant agents in networked systems. These networks usually describe scenarios using nodes (objects) and edges (the objects' relations). The so-called standard centrality measures aim to solve this kind of challenge, ranking the nodes by their supposed relevance and electing the most relevant ones. This problem becomes more challenging when a single network is not enough to depict the whole scenario. In these cases, we can work with multiplex networks, characterized by a set of network layers, each describing interrelationships that can change depending on external factors, e.g., time. This paper proposes a new centrality measure, the Group-based Centrality for Undirected Multiplex Networks, to find the most relevant nodes in an undirected multiplex network. As a case study, we use a Brazilian corruption investigation known as the Car Wash Operation. Our proposed centrality outperforms well-known centrality methods such as betweenness, eigenvector, weighted degree, Multiplex PageRank, closeness, and cross-layer degree centrality.
INDEX TERMS: group-based centrality measures, multiplex centrality measures, multiplex networks.
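Among the baselines the abstract lists, cross-layer degree centrality is the simplest to state: aggregate each node's degree over all layers of the multiplex network. As a minimal sketch (the layer and edge data are made up for illustration; the proposed group-based measure itself is not specified in the abstract):

```python
# Hedged sketch: cross-layer degree centrality for an undirected multiplex
# network, represented as a list of per-layer edge lists.
from collections import defaultdict

def cross_layer_degree(layers):
    """Return node -> total degree summed across all layers."""
    deg = defaultdict(int)
    for edges in layers:
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
    return dict(deg)

# Two hypothetical layers (e.g., two snapshots of a relationship network):
layer1 = [("a", "b"), ("b", "c")]
layer2 = [("a", "c"), ("b", "c")]
scores = cross_layer_degree([layer1, layer2])
# "b" and "c" each accumulate degree 3 across the two layers; "a" gets 2.
```

A ranking over `scores` then elects the highest-degree nodes, which is the kind of baseline the proposed measure is compared against.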