There have been several recent proposals for content-oriented network architectures whose underlying mechanisms are surprisingly similar in spirit, but which differ in many details. In this paper we step back from the mechanistic details and focus only on the area where these approaches have a fundamental difference: naming. In particular, some designs adopt hierarchical, human-readable names, whereas others use self-certifying names. When discussing a network architecture, three of the most important requirements are security, scalability, and flexibility. In this paper we examine the two naming approaches in terms of these three basic goals.
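The core distinction between the two naming schemes can be illustrated with a short sketch. A self-certifying name binds the name to a cryptographic key, so the name itself verifies ownership; a hierarchical, human-readable name carries no such binding and must be secured externally. This is a minimal illustration under our own assumptions (SHA-256 as the name digest, an example key); it is not the scheme of any particular proposal.

```python
import hashlib

def self_certifying_name(public_key: bytes) -> str:
    """Derive a name as the hash of the owner's public key.

    Anyone holding the data and the public key can check the
    binding without a trusted directory: recompute the hash and
    compare it to the name. (Illustrative sketch, not any specific
    architecture's exact construction.)
    """
    return hashlib.sha256(public_key).hexdigest()

def verify_binding(name: str, public_key: bytes) -> bool:
    # The name certifies the key, so no external PKI lookup is needed.
    return self_certifying_name(public_key) == name

# A hierarchical, human-readable name, by contrast, carries no
# cryptographic binding; its authenticity must come from an
# external trust system (e.g., signed delegations along the hierarchy).
hierarchical_name = "/example-publisher/videos/intro.mpg"

key = b"example-public-key"          # hypothetical key bytes
name = self_certifying_name(key)
assert verify_binding(name, key)
assert not verify_binding(name, b"some-other-key")
```

The trade-off the paper examines follows directly: the self-certifying form gets verification for free but is opaque to humans, while the hierarchical form is memorable and aggregatable but needs an external trust mechanism.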
Determining an appropriate sending rate when beginning data transmission into a network with unknown characteristics is a fundamental issue in best-effort networks. Traditionally, the slow-start algorithm has been used to probe the network path for an appropriate sending rate. This paper provides an initial exploration of the efficacy of an alternate scheme called Quick-Start, which is designed to allow transport protocols to explicitly request permission from the routers along a network path to send at a higher rate than allowed by slow-start. Routers may approve, reject, or reduce a sender's requested rate. Quick-Start is not a general-purpose congestion control mechanism, but rather an anti-congestion control scheme: Quick-Start does not detect or respond to congestion, but instead, when successful, obtains permission to send at a high rate on an underutilized path. Many questions need to be answered before Quick-Start can be deployed. Before tackling these thorny engineering questions, however, we need to understand whether Quick-Start provides enough benefit to be worth the effort. Our goal in this paper is therefore to start the process of determining the efficacy of Quick-Start, while also highlighting some of the issues that will need to be addressed to realize a working Quick-Start system.
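The approve/reject/reduce decision described above can be sketched in a few lines. This is a toy model under our own assumptions (a 90% safety threshold, kbps units, illustrative function names); the actual Quick-Start specification defines its own rate encoding and router algorithms.

```python
def process_quick_start_request(requested_kbps: int,
                                capacity_kbps: int,
                                utilized_kbps: int) -> int:
    """Toy sketch of a router's Quick-Start decision.

    Grant at most the spare capacity below a 90% safety threshold
    (threshold and units are illustrative, not from the spec).
    Returns the granted rate: the full request (approve),
    something smaller (reduce), or 0 (reject).
    """
    spare = (capacity_kbps * 9) // 10 - utilized_kbps
    if spare <= 0:
        return 0                       # path too busy: reject
    return min(requested_kbps, spare)  # approve, or reduce

def granted_along_path(requested_kbps, hops):
    # The sender can use only the minimum rate granted by any
    # router on the path; a single rejection vetoes the request.
    return min(process_quick_start_request(requested_kbps, c, u)
               for c, u in hops)
```

For example, a lightly loaded 10 Mbps hop approves a 4 Mbps request in full, a 70%-utilized hop reduces it, and a saturated hop rejects it; the sender ends up with the minimum grant along the path.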
Many residential and small business users connect to the Internet via home gateways, such as DSL and cable modems. The characteristics of these devices heavily influence the quality and performance of the Internet service that these users receive. Anecdotal evidence suggests that an extremely diverse set of behaviors exists in the deployed base, forcing application developers to design for the lowest common denominator. This paper experimentally analyzes some characteristics of a substantial number of different home gateways: binding timeouts, queuing delays, throughput, protocol support and others.
The purpose of this document is to move the F-RTO (Forward RTO-Recovery) functionality for TCP in RFC 4138 from Experimental to Standards Track status. The F-RTO support for Stream Control Transmission Protocol (SCTP) in RFC 4138 remains with Experimental status. See Appendix B for the differences between this document and RFC 4138. Spurious retransmission timeouts cause suboptimal TCP performance because they often result in unnecessary retransmission of the last window of data. This document describes the F-RTO detection algorithm for detecting spurious TCP retransmission timeouts. F-RTO is a TCP sender-only algorithm that does not require any TCP options to operate. After retransmitting the first unacknowledged segment triggered by a timeout, the F-RTO algorithm of the TCP sender monitors the incoming acknowledgments to determine whether the timeout was spurious. It then decides whether to send new segments or retransmit unacknowledged segments. The algorithm effectively helps to avoid additional unnecessary retransmissions and thereby improves TCP performance in the case of a spurious timeout.
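The detection step the abstract describes can be sketched as follows. This is a deliberately simplified model of the decision logic only (it omits the duplicate-ACK and SACK-enhanced variants that the full RFC 5682 algorithm specifies); the function name and the boolean encoding of ACKs are our own.

```python
def frto_detect(acks_after_rto):
    """Simplified sketch of F-RTO spurious-timeout detection.

    After an RTO fires, the sender retransmits the first
    unacknowledged segment and then watches the next two
    acknowledgments:
      * if the first ACK advances the send window, transmit NEW
        segments (instead of retransmitting) and keep watching;
      * if the second ACK also advances the window, the original
        transmissions were being delivered, so the timeout was
        spurious and go-back-N retransmission is avoided.

    `acks_after_rto` is a list of booleans, True when that ACK
    advanced the send window. Real F-RTO (RFC 5682) also handles
    duplicate ACKs and SACK information, which this sketch omits.
    """
    if len(acks_after_rto) < 2:
        return "inconclusive"
    first, second = acks_after_rto[0], acks_after_rto[1]
    if first and second:
        return "spurious"    # continue with new data, restore state
    return "genuine"         # fall back to conventional RTO recovery
```

The key property is that the sender needs no TCP options or receiver cooperation: the verdict comes purely from observing whether the post-timeout ACKs acknowledge data that was never retransmitted.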
Vertical hand-offs between different wireless access technologies have become more relevant after the recent introduction of multi-access mobile terminals with Wireless LAN (WLAN) and Wireless WAN (WWAN) technologies. While the IP mobility mechanisms are rather well known, TCP performance still suffers when moving between WLAN and WWAN accesses. First, with a high-latency WWAN link technology such as GPRS it takes several seconds before the TCP congestion window reaches the path capacity. Second, by the time the notification of the first packet loss arrives at the TCP sender, several packets have already been lost due to the slow-start overshoot, and the TCP sender needs to retransmit a large number of packets from the last transmission window. Third, after a vertical hand-off the path characteristics may have changed dramatically, in which case the TCP congestion control state is no longer valid. In this paper we investigate Quick-Start, a mechanism for avoiding the initial slow-start delay, in the context of wireless multi-access terminals. We also propose an enhancement to Quick-Start to alleviate the effects of slow-start overshoot, and we apply Quick-Start after a vertical hand-off to quickly learn the available capacity on the new end-to-end path. An explicit cross-layer hand-off notification triggers Quick-Start when the hand-off completes. We conduct simulations with different hand-off models, and our simulations yield promising results with Quick-Start.
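The cross-layer trigger described above amounts to two events at the sender: a hand-off notification that invalidates the old congestion state, and a Quick-Start grant that seeds the new one. The toy class below sketches this under our own assumptions (class and method names are illustrative, and the cwnd conversion cwnd ≈ rate × RTT / MSS is a standard back-of-the-envelope, not the paper's exact mechanism).

```python
class MultiAccessSender:
    """Toy sketch: a sender that reacts to a cross-layer hand-off
    notification by discarding stale congestion state and issuing
    a Quick-Start request on the new path. Names are illustrative,
    not from the paper."""

    def __init__(self, initial_cwnd=1):
        self.cwnd = initial_cwnd            # in segments
        self.quick_start_pending = False

    def on_handoff_complete(self):
        # After a vertical hand-off the path characteristics may
        # have changed dramatically, so the old cwnd is meaningless.
        self.cwnd = 1
        self.quick_start_pending = True     # ask the new path explicitly

    def on_quick_start_grant(self, granted_kbps, rtt_ms, mss=1460):
        # Convert the granted rate into a congestion window:
        # cwnd ~= rate[bytes/s] * RTT[s] / MSS  (segments).
        # 1 kbps = 125 bytes/s.
        bytes_per_rtt = granted_kbps * 125 * (rtt_ms / 1000)
        self.cwnd = max(1, int(bytes_per_rtt / mss))
        self.quick_start_pending = False
```

For instance, a 2 Mbps grant on a 200 ms WWAN path seeds a window of roughly 34 segments, skipping the several-second slow-start ramp the abstract describes.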
This paper presents Webget, a measurement tool that measures web Quality of Service (QoS) metrics including the DNS lookup time, time to first byte (TTFB), and the download time. Webget also captures web complexity metrics such as the number and the size of objects that make up the website. We deploy the Webget test to measure the web performance of Google, YouTube, and Facebook from 182 SamKnows probes. Using a 3.5-year-long (Jan 2014–Jul 2017) dataset, we show that the DNS lookup time of these popular Content Delivery Networks (CDNs) and the download time of Google have improved over time. We also show that the TTFB towards Facebook exhibits worse performance than the Google CDN. Moreover, we show that the number and the size of objects are not the only factors that affect the web download time. We observe that these webpages perform differently across regions and service providers. We also developed a web measurement system, WePR (Web Performance and Rendering), that measures the same web QoS and complexity metrics as Webget, but also captures web Quality of Experience (QoE) metrics such as rendering time. WePR has a distributed architecture where the component that measures the web QoS and complexity metrics is deployed on the SamKnows probe, while the rendering time is calculated on a central server. We measured the rendering performance of four websites. We show that in 80% of the cases, the rendering time of the websites is faster than the downloading time. The source code of the WePR system and the dataset are made publicly available.
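The three QoS timings named above can be sketched for a plain HTTP fetch as follows. This is a minimal illustration under our own assumptions (function name is ours, plain HTTP on port 80, no redirects or TLS); Webget's actual implementation on the SamKnows probe differs.

```python
import socket
import time

def measure_web_qos(host: str, path: str = "/", port: int = 80):
    """Sketch of Webget-style QoS timings for one HTTP fetch:
    DNS lookup time, time to first byte (TTFB), and total
    download time, all in seconds. Illustrative only."""
    # DNS lookup time: resolve the host name.
    t0 = time.monotonic()
    addr = socket.getaddrinfo(host, port)[0][4][0]
    dns_time = time.monotonic() - t0

    # TTFB and download time: connect, send the request, and time
    # the first byte and the full body.
    t1 = time.monotonic()
    with socket.create_connection((addr, port), timeout=10) as s:
        s.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                  "Connection: close\r\n\r\n".encode())
        first = s.recv(1)                  # first byte of the response
        ttfb = time.monotonic() - t1
        body = first
        while chunk := s.recv(4096):       # drain until the peer closes
            body += chunk
    download_time = time.monotonic() - t1
    return dns_time, ttfb, download_time
```

Complexity metrics (object count and sizes) would require parsing the fetched HTML and repeating this fetch per embedded object, which is where page structure starts to dominate the download time, as the paper observes.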