Autonomic communications seek to improve the ability of networks and services to cope with unpredicted change, including changes in topology, load, task, the physical and logical characteristics of the networks that can be accessed, and so forth. Broad-ranging autonomic solutions require designers to account for a range of end-to-end issues affecting programming models, network and contextual modeling and reasoning, decentralised algorithms, and trust acquisition and maintenance: issues whose solutions may draw on approaches and results from a surprisingly broad range of disciplines. We survey the current state of autonomic communications research and identify significant emerging trends and techniques.
In this paper, we analyze the behavior of packet-switched communication networks in which packets arrive dynamically at the nodes and are routed in discrete time steps across the edges. We focus on a basic adversarial model of packet arrival and path determination for which the time-averaged arrival rate of packets requiring the use of any edge is limited to be less than 1. This model can reflect the behavior of connection-oriented networks with transient connections (such as ATM networks) as well as connectionless networks (such as the Internet). We concentrate on greedy (also known as work-conserving) contention-resolution protocols. A crucial issue that arises in such a setting is that of stability: will the number of packets in the system remain bounded as the system runs for an arbitrarily long period of time? We study the universal stability of networks (i.e., stability under all greedy protocols) and the universal stability of protocols (i.e., stability in all networks). Once the stability of a system is granted, we focus on the two main parameters that characterize its performance: the maximum queue size required and the maximum end-to-end delay experienced by any packet. Among other things, we show: (i) there exist simple greedy protocols that are stable for all networks; (ii) there exist other commonly used protocols (such as FIFO) and networks (such as arrays and hypercubes) that are not stable; (iii) the n-node ring is stable for all greedy routing protocols (with maximum queue size and packet delay linear in n); (iv) there exists a simple distributed randomized greedy protocol that is stable for all networks and requires only polynomial queue size and polynomial delay. Our results resolve several questions posed by Borodin et al., and provide the first examples of (i) a protocol that is stable for all networks, and (ii) a protocol that is not stable for all networks.
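To make the model concrete, here is a minimal sketch (not the paper's own experiments) of a greedy contention-resolution protocol on a unidirectional ring: each edge forwards one packet per time step, ties are broken by the longest-in-system (oldest-first) rule, and the injection pattern and all parameters are illustrative assumptions chosen so the per-edge arrival rate stays below 1.

```python
import random
from collections import defaultdict

def simulate_ring(n=8, rate=0.9, steps=5000, seed=0):
    """Toy simulation of greedy routing on a unidirectional n-node ring.

    Each time step: (1) at most one packet is injected (with probability
    `rate`) at a random node, with a contiguous path of ring edges, so the
    time-averaged load on any edge stays below 1; (2) every edge forwards
    one waiting packet, chosen by the greedy longest-in-system rule.
    """
    rng = random.Random(seed)
    queues = defaultdict(list)   # edge index -> list of (birth_time, remaining_hops)
    max_queue = 0
    for t in range(steps):
        if rng.random() < rate:
            src = rng.randrange(n)
            hops = rng.randint(1, n - 1)
            queues[src].append((t, hops))
        moved = []
        for e in range(n):
            if queues[e]:
                queues[e].sort()                 # oldest (smallest birth time) first
                birth, hops = queues[e].pop(0)
                if hops > 1:                     # not yet at its destination
                    moved.append(((e + 1) % n, (birth, hops - 1)))
        for e, pkt in moved:                     # apply moves after all edges fire
            queues[e].append(pkt)
        max_queue = max(max_queue, max((len(q) for q in queues.values()), default=0))
    return max_queue

print("max queue size observed:", simulate_ring())
```

Consistent with result (iii), the observed maximum queue size on the ring stays bounded for such a run; an unstable protocol/network pair would instead show queues growing with the simulation length.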
For earthquake simulations to play an important role in the reduction of seismic risk, they must be capable of high resolution and high fidelity. We have developed algorithms and tools for earthquake simulation based on multiresolution hexahedral meshes. We have used this capability to carry out 1 Hz simulations of the 1994 Northridge earthquake in the LA Basin using 100 million grid points. Our wave propagation solver sustains 1.21 teraflop/s for 4 hours on 3000 AlphaServer processors at 80% parallel efficiency. Because of uncertainties in characterizing earthquake source and basin material properties, a critical remaining challenge is to invert for source and material parameter fields for complex 3D basins from records of past earthquakes. Towards this end, we present results for material and source inversion of high-resolution models of basins undergoing antiplane motion using parallel scalable inversion algorithms that overcome many of the difficulties particular to inverse heterogeneous wave propagation problems.
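The paper's solver is a 3D multiresolution hexahedral finite-element code; as a vastly simplified illustration of the forward problem only, the following sketch time-steps the 1D scalar (antiplane-style) wave equation with an explicit finite-difference scheme. The heterogeneous wave speed standing in for the basin material model, the point source, and all parameters are invented for illustration.

```python
import numpy as np

def antiplane_wave_1d(nx=400, nt=800, dx=10.0, dt=0.001):
    """Simplified 1D analogue of antiplane (SH) wave propagation.

    Solves u_tt = c(x)^2 u_xx with a second-order explicit scheme on a
    uniform grid; c(x) is a heterogeneous shear-wave speed (a stand-in
    for the basin material model) and a point term mimics the source.
    """
    c = np.full(nx, 2000.0)          # background shear-wave speed (m/s)
    c[nx // 2:] = 1000.0             # slower "basin" half, illustrative contrast
    assert dt * c.max() / dx < 1.0   # CFL stability condition for the scheme
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    for n in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u_next = 2 * u - u_prev + (c * dt) ** 2 * lap
        u_next[nx // 4] += dt**2 * np.sin(40.0 * np.pi * n * dt)  # point source
        u_prev, u = u, u_next
    return u

print("final wavefield max amplitude:", antiplane_wave_1d().max())
```

The inverse problem described in the abstract wraps repeated solves of this kind of forward model inside an optimization loop that adjusts c(x) and the source term to match recorded seismograms, which is what makes scalable parallel inversion algorithms essential.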
The concept of unreliable failure detector was introduced by Chandra and Toueg [2] as a mechanism that provides information about process failures. Depending on the properties the failure detectors guarantee, they proposed a taxonomy of failure detectors. It has been shown that one of the classes of this taxonomy, namely Eventually Strong (◊S), is the weakest class allowing the Consensus problem to be solved. In this paper, we present a new algorithm implementing ◊S. Our algorithm guarantees that eventually all the correct processes agree on a common correct process. This property trivially allows us to provide the accuracy and completeness properties required by ◊S. We then show that our algorithm is better than any other proposed implementation of ◊S in terms of the number of messages and the total amount of information periodically sent. In particular, previous algorithms require the periodic exchange of at least a quadratic amount of information, while ours requires only O(n log n) (where n is the number of processes). However, we also propose a new measure to evaluate the efficiency of this kind of algorithm, the eventual monitoring degree, which does not rely on periodic behavior and better expresses the degree of processing required by the algorithms. We show that the runs of our algorithm have optimal eventual monitoring degree.

It has been shown that one of the classes of failure detectors Chandra and Toueg defined, namely Eventually Strong (◊S), is the weakest class allowing Consensus to be solved. Since then, many distributed fault-tolerant algorithms have been designed based on Chandra-Toueg's unreliable failure detectors [5, 6, 9, 11]. Almost all of them consider a system model in which the failure detector they require is available, i.e., an asynchronous system augmented with a failure detector, such that the algorithm is designed on top of it. In this work we have taken a different approach, investigating how to implement those unreliable failure detectors. From the results of Fischer et al. and those of Chandra and Toueg, the impossibility of implementing failure detectors strong enough to solve the distributed Consensus problem in a purely asynchronous system can be derived. In [2], Chandra and Toueg presented a timeout-based algorithm implementing an Eventually Perfect (◊P) failure detector, a class strictly stronger than ◊S, in models of partial synchrony [3]. This algorithm is based on all-to-all communication: each process periodically sends an I-AM-ALIVE message to all processes, in order to inform them that it has not crashed, and thus requires a quadratic number of messages to be sent periodically. More recently, Larrea et al. [7] proposed more efficient algorithms implementing several classes of failure detectors, including ◊S and ◊P. These algorithms are based on a ring arrangement of the processes, and require only a linear number of messages to be sent periodically.
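The sketch below illustrates the timeout-and-heartbeat mechanism described above (the Chandra-Toueg style that these implementations build on), not the paper's ring-based O(n log n) algorithm: a process suspects a peer whose I-AM-ALIVE heartbeat is late and doubles that peer's timeout whenever a suspicion proves wrong, so that in a partially synchronous run false suspicions eventually stop. The class and method names, and the single-process simulation, are illustrative assumptions.

```python
import time

class HeartbeatMonitor:
    """Timeout-based monitoring of peers via I-AM-ALIVE heartbeats."""

    def __init__(self, peers, initial_timeout=1.0):
        now = time.monotonic()
        self.deadline = {p: now + initial_timeout for p in peers}
        self.timeout = {p: initial_timeout for p in peers}
        self.suspected = set()

    def on_heartbeat(self, peer):
        if peer in self.suspected:          # false suspicion: be more patient next time
            self.suspected.discard(peer)
            self.timeout[peer] *= 2
        self.deadline[peer] = time.monotonic() + self.timeout[peer]

    def current_suspects(self):
        now = time.monotonic()
        for p, d in self.deadline.items():
            if now > d:                     # heartbeat is overdue: suspect the peer
                self.suspected.add(p)
        return set(self.suspected)

monitor = HeartbeatMonitor(peers=["p1", "p2", "p3"], initial_timeout=0.05)
time.sleep(0.1)                 # no heartbeats arrive, so all peers become suspected
print(monitor.current_suspects())
monitor.on_heartbeat("p2")      # p2 proves alive; it is unsuspected and its timeout doubles
print(monitor.current_suspects())
```

Run all-to-all, this scheme costs a quadratic number of periodic messages, which is exactly the overhead the paper's algorithm reduces by having processes agree on, and monitor, a common correct process.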
Peer-to-peer (P2P) systems are moving from application-specific architectures to a generic, service-oriented design philosophy. This raises interesting problems in connection with providing useful P2P middleware services that are capable of dealing with resource assignment and management in a large-scale, heterogeneous and unreliable environment. One such service, the slicing service, has been proposed to allow for an automatic partitioning of P2P networks into groups (slices) that represent a controllable amount of some resource and that are also relatively homogeneous with respect to that resource, in the face of churn and other failures. In this report we propose two algorithms to solve the distributed slicing problem. The first algorithm improves upon an existing algorithm that is based on gossip-based sorting of a set of uniform random numbers; we speed up convergence via a heuristic for gossip peer selection. The second algorithm is based on a different approach: statistical approximation of the rank of nodes in the ordering. The scalability, efficiency and resilience to dynamics of both algorithms rely on their gossip-based design. We present theoretical and experimental results to demonstrate the viability of these algorithms.

Key-words: Slicing, Gossip, Slice, Churn, Peer-to-Peer, Aggregation, Large Scale, Resource Allocation.
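As a minimal, centrally simulated sketch of the gossip-based sorting idea the first algorithm builds on: each node holds a uniform random value in [0, 1), and at each gossip exchange two nodes swap their values if the values are ordered differently from the nodes' resource attributes. Once sorted, a node's value approximates its normalized rank, so a slice is just an interval of [0, 1). Peer selection, round counts, and names here are illustrative assumptions; a real deployment runs pairwise exchanges between actual peers rather than a central loop.

```python
import random

def gossip_slice(attributes, rounds=200, seed=0):
    """Centrally simulated gossip-based sorting of uniform random values."""
    rng = random.Random(seed)
    n = len(attributes)
    values = [rng.random() for _ in range(n)]   # one uniform random value per node
    for _ in range(rounds * n):
        i, j = rng.randrange(n), rng.randrange(n)
        # Swap when random values and attributes are oppositely ordered.
        if (attributes[i] - attributes[j]) * (values[i] - values[j]) < 0:
            values[i], values[j] = values[j], values[i]
    return values

rng = random.Random(1)
attrs = [rng.uniform(0, 100) for _ in range(50)]  # e.g., per-node resource capacity
vals = gossip_slice(attrs)
slice_members = [i for i, v in enumerate(vals) if v < 0.2]  # ~bottom 20% by attribute
print(len(slice_members), "nodes in the bottom slice")
```

The slice test (value < 0.2) is purely local once the values have converged, which is what makes the service robust to churn: a joining or recovering node only needs a fresh random value and further gossip exchanges to find its slice.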