Abstract-Recent natural disasters have revealed that emergency networks presently cannot disseminate the necessary disaster information, making it difficult to deploy and coordinate relief operations. These disasters have reinforced the understanding that telecommunication networks constitute a critical infrastructure of our society, and the urgency of establishing protection mechanisms against disaster-based disruptions. Hence, it is important to have emergency networks able to maintain sustainable communication in disaster areas. Moreover, the network architecture should be designed so that connectivity is maintained among nodes outside the impacted area, while ensuring that services for customers not in the affected area suffer minimal impact. As a first step toward achieving disaster resilience, the RECODIS project was formed, and its Working Group 1 members conducted a comprehensive literature survey on "strategies for communication networks to protect against large-scale natural disasters," which is summarized in this article.
Index Terms-vulnerability, end-to-end resilience, natural disasters, disaster-based disruptions.
In this paper, an efficient identity-based batch signature verification scheme is proposed for vehicular communications. With the proposed scheme, vehicles can verify a batch of signatures at once instead of one by one, so the message verification speed can be tremendously increased. To identify invalid signatures within a batch, the paper adopts a group testing technique, which locates the invalid signatures with a small number of batch verifications. In addition, a trust authority in the scheme is capable of tracing a vehicle's real identity from its pseudo identity, and therefore conditional privacy preservation can also be achieved. Moreover, since identity-based cryptography is employed in the scheme to generate private keys for pseudo identities, certificates are not required and thus transmission overhead can be significantly reduced.
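The group-testing idea behind this abstract can be illustrated with a minimal sketch. Assuming a batch-verification oracle that reports only whether *all* signatures in a subset are valid, recursive binary splitting (one common group-testing strategy; the paper may use a different variant) isolates the invalid signatures with far fewer oracle calls than checking each signature individually. The `batch_verify` stand-in below just checks membership in a known set of bad indices; a real scheme would run the pairing-based batch check.

```python
# Hypothetical sketch: recursive binary splitting (a form of group testing)
# to locate invalid signatures using a batch-verification oracle.

def find_invalid(indices, batch_verify):
    """Return the invalid indices among `indices` using few batch calls."""
    if batch_verify(indices):      # whole group valid: no invalid members
        return []
    if len(indices) == 1:          # a single failing signature is isolated
        return list(indices)
    mid = len(indices) // 2
    return (find_invalid(indices[:mid], batch_verify)
            + find_invalid(indices[mid:], batch_verify))

# Example: a batch of 32 signatures in which indices 3 and 17 are invalid.
bad = {3, 17}
calls = 0

def batch_verify(idx):
    global calls
    calls += 1
    return not any(i in bad for i in idx)

found = find_invalid(list(range(32)), batch_verify)
# found == [3, 17], using fewer than 32 batch verifications
```

When invalid signatures are rare, the number of batch calls grows roughly with the number of invalid signatures times the logarithm of the batch size, which is the saving the abstract refers to.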
Abstract-Achieving fast and precise failure localization has long been a highly desired feature in all-optical mesh networks. The m-trail (monitoring trail) has been proposed as the most general monitoring structure for achieving unambiguous failure localization (UFL) of any single link failure while effectively reducing the amount of alarm signals flooding the network. However, it is critical to come up with a fast and intelligent m-trail design approach that minimizes the number of m-trails and the total bandwidth consumed, which determine the length of the alarm code and the bandwidth overhead of the m-trail deployment, respectively. In this paper, the m-trail design problem is investigated. To gain a deeper understanding of the problem, we first conduct a bound analysis on the minimum alarm code length of each link required for UFL on the sparsest (i.e., ring) and densest (i.e., fully meshed) topologies. Then, a novel algorithm based on random code assignment (RCA) and random code swapping (RCS) is developed for solving the m-trail design problem. The prototype of the algorithm can be found in [1]. The algorithm is verified by comparison with an Integer Linear Program (ILP) approach, and the results demonstrate its superiority in minimizing the fault management cost and bandwidth consumption while achieving a significant reduction in computation time. To investigate the impact of topology diversity, extensive simulation is conducted on thousands of random network topologies with systematically increased network density.
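The alarm-code view of m-trail design can be sketched briefly. Assuming each link is assigned a distinct nonzero binary code of length J, and m-trail j traverses exactly the links whose j-th code bit is 1, the set of alarming m-trails after a single link failure spells out the failed link's code. This is only an illustration of the random code assignment (RCA) step; the graph, link list, and helper names are invented, and the paper's algorithm additionally swaps codes (RCS) so that each trail's link set forms a routable trail in the topology.

```python
import math
import random

# Hypothetical sketch of random code assignment (RCA): each link receives
# a distinct nonzero alarm code of length J = ceil(log2(|E| + 1)), the
# information-theoretic minimum for unambiguous single-link localization.

def assign_alarm_codes(links, seed=0):
    J = math.ceil(math.log2(len(links) + 1))  # minimum alarm-code length
    rng = random.Random(seed)
    codes = rng.sample(range(1, 2 ** J), len(links))  # distinct, nonzero
    return J, dict(zip(links, codes))

links = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy 4-node graph
J, code = assign_alarm_codes(links)

# Decoding: the indices of the alarming m-trails reconstruct the code of
# the failed link, which identifies it uniquely.
failed = (2, 3)
alarm = [j for j in range(J) if code[failed] >> j & 1]
```

The lower bound J ≥ ⌈log₂(|E| + 1)⌉ is why the abstract's bound analysis focuses on alarm code length: fewer m-trails than that cannot distinguish all single-link failures (plus the no-failure case).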
In order to evaluate the expected availability of a service, a network administrator should consider all possible failure scenarios under the specific service availability model stipulated in the corresponding service-level agreement. Given the increase in natural disasters and malicious attacks with geographically extensive impact, considering only independent single link failures is often insufficient. In this paper, we build a stochastic model of geographically correlated link failures caused by disasters, in order to estimate the hazards a network may be prone to, and to understand the complex correlation between possible link failures. With such a model, one can quickly extract information such as the probability that an arbitrary set of links fails simultaneously, the probability that two nodes become disconnected, the probability that a path survives a failure, etc. Furthermore, we introduce a pre-computation process, which enables us to succinctly represent the joint probability distribution of link failures. In particular, we generate, in polynomial time, a quasilinear-sized data structure with which the joint failure probability of any set of links can be computed efficiently.
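The kind of quantity the abstract describes can be illustrated with a toy Monte Carlo estimate (not the paper's polynomial-time data structure): assume a single disk-shaped disaster of radius R strikes at a uniformly random epicenter in the unit square, and a link fails when its midpoint lies inside the disk. The link coordinates and parameter values below are invented for illustration.

```python
import random

# Hypothetical Monte Carlo sketch: estimate the joint failure probability
# of a set of links under a toy geographically correlated disaster model.

def joint_failure_prob(link_midpoints, target, R=0.3, trials=100_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        cx, cy = rng.random(), rng.random()  # random disaster epicenter
        if all((link_midpoints[e][0] - cx) ** 2
               + (link_midpoints[e][1] - cy) ** 2 <= R * R
               for e in target):
            hits += 1
    return hits / trials

mids = {"e1": (0.2, 0.2), "e2": (0.25, 0.3), "e3": (0.8, 0.9)}
p_near = joint_failure_prob(mids, ["e1", "e2"])  # nearby links: correlated
p_far = joint_failure_prob(mids, ["e1", "e3"])   # distant links: never joint
```

Nearby links fail together with substantial probability, while links farther apart than the disaster diameter can never fail jointly in this model; this is exactly the correlation structure that independent-failure models miss, and that the paper's pre-computed data structure answers queries about without sampling.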
Abstract-A Shared Risk Link Group (SRLG) is a failure event the network is prepared for, consisting of a set of links subject to a common risk of failing together. When planning a backbone network, the list of SRLGs must be defined very carefully, because leaving out one likely failure event will significantly degrade the observed reliability of the network. Regional failures manifest at multiple locations of the network that are physically close to each other. In this paper we show that operators need to prepare a network for only a small number of possible regional failure events. In particular, we give a fast systematic approach to generate the list of SRLGs that covers every possible circular disk failure of a given radius r. We show that this list has O((n + x)σ_r) SRLGs, where n is the number of nodes in the network, x is the number of link crossings, and σ_r is the maximal number of links that can be hit by a disk failure of radius r. Finally, through extensive simulations we show that in practice this list has size ≈ 1.2n.
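The object being enumerated can be illustrated with a brute-force grid approximation (not the paper's systematic O((n + x)σ_r) construction): slide a disk of radius r over candidate centers and record each distinct set of links it hits as one SRLG. The toy topology, function names, and grid resolution below are invented for illustration.

```python
# Hypothetical brute-force sketch: enumerate the distinct link sets that a
# disk-shaped regional failure of radius r can hit. A link fails when the
# disk center is within distance r of its segment.

def disk_srlgs(links, r, grid=50):
    def dist_point_seg(px, py, ax, ay, bx, by):
        # Euclidean distance from point (px, py) to segment (a, b).
        vx, vy = bx - ax, by - ay
        t = ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy or 1.0)
        t = max(0.0, min(1.0, t))
        qx, qy = ax + t * vx, ay + t * vy
        return ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5

    srlgs = set()
    for i in range(grid + 1):
        for j in range(grid + 1):
            cx, cy = i / grid, j / grid
            hit = frozenset(e for e, (a, b) in links.items()
                            if dist_point_seg(cx, cy, *a, *b) <= r)
            if hit:
                srlgs.add(hit)
    return srlgs

links = {"e1": ((0.1, 0.1), (0.4, 0.1)),
         "e2": ((0.4, 0.1), (0.4, 0.4)),
         "e3": ((0.8, 0.8), (0.9, 0.9))}
groups = disk_srlgs(links, r=0.15)
```

Although the disk center ranges over a continuum, only finitely many distinct hit sets exist, which is why the SRLG list stays small; the paper's contribution is computing that list exactly and bounding its size, rather than approximating it on a grid as above.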
Abstract-Large-scale information dissemination via multicast communication has been attracting increasing attention, be it through the uptake of new services or through recent research efforts. The core issues are supporting increased forwarding speed, avoiding state in the forwarding elements, and scaling in terms of the multicast tree size. This paper addresses all of these challenges, which are crucial for any scalable multicast scheme to be successful, by revisiting the idea of in-packet Bloom filters and source routing. In contrast to the traditional in-packet Bloom filter concept, we build our Bloom filter by encoding limited information about the structure of the tree. An analytical investigation is conducted and approximation formulas are provided for optimal-length Bloom filters that eliminate typical Bloom filter drawbacks such as false-positive forwarding. These filters can be used in several multicast implementations, which is demonstrated through a prototype. Thorough simulations demonstrate the scalability of the proposed Bloom filters compared to their counterparts.
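The traditional in-packet Bloom filter concept that this abstract builds on can be sketched briefly. Assuming the source inserts every link ID of the multicast tree into an m-bit filter carried in the packet header, each switch forwards on exactly those outgoing links that test positive, so no per-group state is needed in the forwarding elements. The filter size, hash construction, and link naming below are invented for illustration; note that this plain construction still suffers the false-positive forwarding that the paper's tree-structure-aware filters are designed to avoid.

```python
import hashlib

# Hypothetical sketch of stateless in-packet Bloom filter forwarding.
M, K = 64, 3  # filter length in bits, number of hash functions

def bit_positions(link_id):
    # K deterministic bit positions derived from the link ID.
    return [int(hashlib.sha256(f"{link_id}:{k}".encode()).hexdigest(), 16) % M
            for k in range(K)]

def encode(tree_links):
    # The source ORs the bit patterns of all tree links into one filter.
    bf = 0
    for link in tree_links:
        for pos in bit_positions(link):
            bf |= 1 << pos
    return bf

def member(bf, link_id):
    # A switch forwards on a link iff all its bits are set in the filter.
    return all(bf >> pos & 1 for pos in bit_positions(link_id))

tree = ["A->B", "A->C", "C->D"]
bf = encode(tree)
# Each node tests its outgoing links against the in-packet filter.
forward = [l for l in ["A->B", "A->C", "B->E", "C->D"] if member(bf, l)]
```

Bloom filters have no false negatives, so every tree link always tests positive and the multicast tree is fully covered; the remaining risk is that a non-tree link (here "B->E") occasionally tests positive too, causing spurious forwarding.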