Networking Named Content (NNC) was recently proposed as a new networking paradigm to realise Content Centric Networks (CCNs). The new paradigm changes much about the current Internet, from security, content naming, and resolution to router-level caching and new flow models. In this paper, we study the caching part of the proposed networking paradigm in isolation from the rest of the suggested features. In CCNs, every router caches packets of content and reuses cached packets when they are subsequently requested. It is this caching feature of CCNs that we model and evaluate in this paper. Our modelling proceeds both analytically and by simulation. Initially, we develop a mathematical model for a single router, based on continuous-time Markov chains, which assesses the proportion of time a given piece of content is cached. This model is extended to multiple routers using some simple approximations. The mathematical model is complemented by simulations which examine the caching dynamics, at the packet level, in isolation from the rest of the flow.
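The kind of packet-level caching dynamics described above can be illustrated with a small simulation. This is a hedged sketch, not the paper's model: it assumes an LRU cache, a Zipf-like popularity distribution, and illustrative parameter values, and estimates the fraction of request epochs for which a tracked item is cached (a discrete-time proxy for the occupancy probability the continuous-time Markov chain computes analytically).

```python
import random

def simulate_occupancy(n_items=100, cache_size=10, n_requests=50_000,
                       zipf_alpha=0.8, tracked_item=0, seed=1):
    """Estimate the fraction of request epochs during which
    `tracked_item` is present in a single router's LRU cache."""
    rng = random.Random(seed)
    # Zipf-like popularity: item i requested with weight 1/(i+1)^alpha
    weights = [1.0 / (i + 1) ** zipf_alpha for i in range(n_items)]
    total = sum(weights)
    probs = [w / total for w in weights]
    cache = []          # least recently used at the front
    cached_epochs = 0   # epochs at which the tracked item was cached
    for _ in range(n_requests):
        if tracked_item in cache:
            cached_epochs += 1
        item = rng.choices(range(n_items), weights=probs)[0]
        if item in cache:
            cache.remove(item)      # refresh LRU position
        elif len(cache) >= cache_size:
            cache.pop(0)            # evict least recently used
        cache.append(item)
    return cached_epochs / n_requests

occ = simulate_occupancy()
print(f"estimated occupancy of item 0: {occ:.3f}")
```

As expected, the most popular item spends far more time in the cache than an unpopular one, which is the quantity the analytical model targets.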
Abstract: Management operations performed by Content Delivery Network (CDN) providers consist mainly of controlling the placement of content at different storage locations and deciding where to serve client requests from. Configuration decisions are usually taken using only limited information about the carrier networks, and this can adversely affect network usage. In this work we propose an approach by which ISPs can gain more control over their resources. This involves the deployment of caching points within their networks, which allows them to implement their own content placement strategies. The work presented in this paper investigates lightweight strategies that ISPs can use to manage the placement of content across the various network caching locations according to user demand characteristics. The proposed strategies differ in the volume and nature of the information required to determine new caching configurations. We evaluate the performance of the proposed strategies, in terms of network resource utilization, over a wide range of user demand profiles, and we compare the obtained performance using metrics we define to characterize the demand. The results demonstrate that the proposed metrics can provide useful indications of the performance one strategy can achieve over another and, as such, can be used by ISPs to improve the utilization of network resources.

I. INTRODUCTION

Content Delivery Networks (CDNs) have been the prevalent method for the efficient delivery of rich content across the Internet. To meet the growing demand for content, CDN providers deploy massively distributed storage infrastructures that host content copies for contracting content providers and maintain business relationships with ISPs.
Surrogate servers are strategically placed and connected to ISP network edges [1] so that content can be closer to clients, thus reducing both access latency and the network bandwidth consumed by content delivery.

Current content delivery services operated by large CDN providers such as Akamai [2] and Limelight [3] can exert enormous strain on ISP networks [4]. This is mainly because CDN providers control both the placement of content in surrogate servers spanning different geographic locations and the decision on where to serve client requests from (i.e. server selection) [5]. These decisions are taken without knowledge of the precise network topology and state in terms of traffic load, and may result in network performance degradation.

In this work we propose a cache management approach with which ISPs can have more control over their network resources. Exploiting the decreasing cost of storage modules, our approach involves operating a limited-capacity CDN service within ISP
Abstract: Although direct reciprocity (Tit-for-Tat) contribution systems have been successful in reducing freeloading in peer-to-peer overlays, it has been shown that, unless the contribution network is dense, they tend to be slow (or may even fail) to converge [1]. On the other hand, current indirect reciprocity mechanisms based on reputation systems tend to be susceptible to sybil attacks, peer slander, and whitewashing.

In this paper we present PledgeRoute, an accounting mechanism for peer contributions that is based on social capital. This mechanism allows peers to contribute resources to one set of peers and use this contribution to obtain services from a different set of peers at a different time. PledgeRoute is completely decentralised, can be implemented in both structured and unstructured peer-to-peer systems, and is resistant to the three kinds of attacks mentioned above.

To achieve this, we model contribution transitivity as a routing problem in the contribution network of the peer-to-peer overlay, and we present arguments for the routing behaviour and sybil-proofness of our contribution transfer procedures on this basis. Additionally, we present mechanisms for seeding the contribution network, and a combination of incentive mechanisms and reciprocation policies that motivate peers to adhere to the protocol and maximise their service contributions to the overlay.
When designing distributed systems and Internet protocols, designers can benefit from statistical models of the Internet that can be used to estimate their performance. However, it is frequently impossible for these models to include every property of interest. In such cases, model builders have to select a reduced subset of network properties, and the rest must be estimated from those available. In this paper we present a technique for the analysis of Internet round trip times (RTTs) and their relationship with other geographic and network properties. The technique is applied to a novel dataset comprising ∼19 million RTT measurements, derived from ∼200 million RTT samples, between ∼54 thousand DNS servers. Our main contribution is an information-theoretic analysis that allows us to determine the amount of information that a given subset of geographic or network variables (such as RTT or the great-circle distance between geolocated hosts) gives about other variables of interest. We then provide bounds on the error that can be expected when using statistical estimators for the variables of interest based on subsets of other variables.
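The core information-theoretic idea can be sketched in a few lines. This is an illustrative toy, not the paper's method or dataset: it generates a synthetic "distance" variable and a correlated "RTT" variable, bins both, and plugs empirical frequencies into the plug-in estimator of mutual information, I(X;Y) = Σ p(x,y) log2[p(x,y) / (p(x)p(y))].

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys, bins=10):
    """Plug-in estimate (in bits) of I(X;Y) from paired samples,
    using equal-width binning of each variable."""
    def discretize(vals):
        lo, hi = min(vals), max(vals)
        return [min(bins - 1, int((v - lo) / (hi - lo) * bins)) for v in vals]
    bx, by = discretize(xs), discretize(ys)
    n = len(xs)
    pxy, px, py = Counter(zip(bx, by)), Counter(bx), Counter(by)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y)/(p(x)p(y)) simplifies to c*n/(count_x*count_y)
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

rng = random.Random(0)
dist = [rng.uniform(0, 1) for _ in range(5000)]
rtt = [d + 0.1 * rng.gauss(0, 1) for d in dist]    # correlated with dist
noise = [rng.uniform(0, 1) for _ in range(5000)]   # independent of dist
print(mutual_information(dist, rtt), mutual_information(dist, noise))
```

A correlated pair yields substantially more mutual information than an independent pair; in the paper this quantity is what bounds the achievable accuracy of estimating one network property from others (e.g. via Fano-style inequalities).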
Abstract: Layered video streaming in peer-to-peer (P2P) networks has drawn great interest
Abstract: In this paper we present the Distributed Overlay Anycast Table (DOAT), a structured overlay that implements application-layer anycast, allowing the discovery of the closest host that is a member of a given group. One application is in locality-aware peer-to-peer networks, where peers need to discover low-latency peers participating in the distribution of a particular file or stream. DOAT makes use of network delay coordinates and a space-filling curve to achieve locality-aware routing across the overlay, and Bloom filters to aggregate group identifiers. The solution is designed to optimise both accuracy and query time, which are essential for real-time applications. We simulated DOAT using both random and realistic node distributions. The results show that accuracy is high and query time is low.
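The Bloom-filter aggregation mentioned above can be sketched as follows. This is a generic toy Bloom filter, not DOAT's actual data structure: the bit-array size, hash construction, and group-identifier names are all illustrative assumptions. The `union` operation shows how summaries of group membership from different overlay regions can be aggregated by ORing the bit arrays.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over string items, backed by one integer."""
    def __init__(self, n_bits=1024, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = 0
    def _positions(self, item):
        # derive n_hashes independent positions from salted SHA-256
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits
    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p
    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))
    def union(self, other):
        # aggregate two filters, e.g. from two overlay subtrees
        merged = BloomFilter(self.n_bits, self.n_hashes)
        merged.bits = self.bits | other.bits
        return merged

a, b = BloomFilter(), BloomFilter()
a.add("group:video-42")
b.add("group:file-7")
merged = a.union(b)
```

Membership queries on the merged filter answer "possibly present" or "definitely absent", which is exactly the trade-off that makes Bloom filters suitable for compactly aggregating group identifiers at the cost of occasional false positives.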
Abstract: Many researchers have hypothesised models that explain the evolution of the topology of a target network. The framework described in this paper gives the likelihood that the target network arose from the hypothesised model. This allows rival hypothesised models to be compared on their ability to explain the target network. A null model (of random evolution) is proposed as a baseline for comparison. The framework also considers models made from linear combinations of model components, and a method is given for the automatic optimisation of component weights. The framework is tested on simulated networks with known parameters and also on real data.

I. INTRODUCTION

The field of modelling graph topologies (and in particular the topology of the Internet) has generated a great deal of research interest in recent years (see [1, chapter 3] for a review of the subject and [2] for an Internet topology perspective). This paper introduces FETA (Framework for Evolving Topology Analysis), which can be used to assess potential underlying models for any network for which information about the network's evolution is available. Previously, many researchers have fitted probabilistic topology models by growing candidate networks and assessing how well they match a selection of statistics computed on a snapshot of the real network. The FETA approach, by contrast, uses a single statistic to obtain a rigorous estimate of the likelihood of a model based upon the dynamic evolution of the network. This paper concentrates on results for artificial models, demonstrating that the framework recovers known models. A companion paper [11] reports results from five real networks but does not present the artificial test data given here.

It has been known for some time that a number of networks follow an approximate power law in their degree distribution.
Such networks include the Internet Autonomous System (AS) topology, the World Wide Web, co-authorship networks, sexual contact networks, email networks, networks of actors, networks from biology, and many others (many references are given in [1, table 3.1]). Researchers have attempted to grow artificial versions of such networks with models that assign connection probabilities to existing nodes based upon the graph topology. Often, surprisingly simple models replicate many features of real networks. The celebrated Barabási-Albert (BA) model [3] provides an explanation for such power laws in terms of a
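The preferential-attachment mechanism of the BA model can be sketched in a few lines of code. The starting clique, the attachment count m, and the node count are illustrative parameters; the key idea is that sampling uniformly from a flat list of edge endpoints is equivalent to sampling nodes with probability proportional to degree.

```python
import random

def ba_graph(n, m=2, seed=0):
    """Grow a Barabási-Albert-style graph: each new node attaches to
    m distinct existing nodes chosen with degree-proportional probability."""
    rng = random.Random(seed)
    # start from a small clique on m+1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # each node appears in this list once per incident edge, so a
    # uniform draw from it is a degree-proportional draw of a node
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges

edges = ba_graph(1000)
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
print("max degree:", max(deg.values()), "min degree:", min(deg.values()))
```

The rich-get-richer dynamic produces a heavy-tailed degree distribution: a handful of early nodes accumulate degrees far above m, while most nodes stay near the minimum, which is the power-law behaviour the abstract refers to.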