Abstract: Recent reports in the popular media suggest a significant decrease in peer-to-peer (P2P) file-sharing traffic, attributed to the public's response to legal threats. Have we reached the end of the P2P revolution? In pursuit of legitimate data to verify this hypothesis, we embark on a more accurate measurement effort of P2P traffic at the link level. In contrast to previous efforts, we introduce two novel elements in our methodology. First, we measure traffic of all known popular P2P protocols. Second, we go beyond the "known port" limitation by reverse engineering the protocols and identifying characteristic strings in the payload. We find that, if measured accurately, P2P traffic has never declined; in fact, we have never seen the proportion of P2P traffic decrease over time (any change is an increase) in any of our data sources.
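The payload-based identification described above can be sketched as a simple signature match against the first bytes of a packet. This is a minimal illustration, not the study's actual classifier: the BitTorrent and Gnutella handshake prefixes below are well-known protocol constants, while the eDonkey marker and the overall structure are simplifying assumptions.

```python
# Hedged sketch: classifying traffic as P2P by characteristic payload
# strings rather than by well-known port numbers.

# Illustrative signature table; real deployments match more patterns
# and at more offsets than just the start of the payload.
P2P_SIGNATURES = {
    "BitTorrent": [b"\x13BitTorrent protocol"],  # handshake prefix
    "Gnutella":   [b"GNUTELLA CONNECT"],         # connection request
    "eDonkey":    [b"\xe3"],                     # common frame marker (assumed sufficient here)
}

def classify_payload(payload: bytes):
    """Return the matching protocol name, or None if no signature matches."""
    for proto, sigs in P2P_SIGNATURES.items():
        for sig in sigs:
            if payload.startswith(sig):
                return proto
    return None
```

A port-based counter would miss any of these flows running on non-standard ports, which is precisely the measurement gap the abstract points to.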
We present the concept of network traffic streams, and the ways they aggregate into flows through Internet links. We describe a method of measuring the size and lifetime of Internet streams, and use this method to characterise traffic distributions at two different sites. We find that although most streams (about 45% of them) are dragonflies, lasting less than 2 seconds, a significant number of streams have lifetimes of hours to days, and can carry a high proportion (50% to 60%) of the total bytes on a given link. We define tortoises as streams that last longer than 15 minutes. We point out that streams can be classified not only by lifetime (dragonflies and tortoises) but also by size (mice and elephants), and note that stream size and lifetime are independent dimensions. We submit that Internet Service Providers (ISPs) need to be aware of the distribution of Internet stream sizes, and of the impact of the difference in behaviour between short and long streams. In particular, any forwarding cache mechanisms in Internet routers must be able to cope with a high volume of short streams. In addition, ISPs should realise that Long-Running (LR) streams can contribute a significant fraction of their packet and byte volumes, something they may not have allowed for when using traditional 'flat rate user bandwidth consumption' approaches to provisioning and engineering.
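The two independent classification dimensions above can be sketched as a small quadrant function. The 2-second dragonfly and 15-minute tortoise cutoffs come from the abstract; the byte threshold separating mice from elephants is an assumed illustrative value, not one reported by the study.

```python
# Hedged sketch: placing a stream on the two independent axes described
# above (lifetime: dragonfly/tortoise, size: mouse/elephant).

DRAGONFLY_MAX_S = 2          # from the abstract: < 2 seconds
TORTOISE_MIN_S = 15 * 60     # from the abstract: > 15 minutes
ELEPHANT_MIN_BYTES = 100_000 # assumption for illustration only

def classify_stream(lifetime_s: float, size_bytes: int):
    """Return a (lifetime_class, size_class) pair for one stream."""
    if lifetime_s < DRAGONFLY_MAX_S:
        lifetime = "dragonfly"
    elif lifetime_s > TORTOISE_MIN_S:
        lifetime = "tortoise"
    else:
        lifetime = "intermediate"
    size = "elephant" if size_bytes >= ELEPHANT_MIN_BYTES else "mouse"
    return lifetime, size
```

Because the axes are independent, a short dragonfly can still be an elephant (a burst of many bytes), and a long tortoise can be a mouse (e.g. a long-lived, low-rate keepalive session).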
The Domain Name System (DNS) allows domain names to be used in network transactions (email, web requests, etc.) instead of IP addresses. The root of the DNS distributed database is managed by 13 root nameservers. We passively measure the performance of one of them: F.root-servers.net. These measurements show an astounding number of bogus queries: 60% to 85% of observed queries were repeated from the same host within the measurement interval. Over 14% of a root server's query load is due to queries that violate the DNS specification. Denial-of-service attacks using root servers are common and occurred throughout our measurement period (7-24 Jan 2001). Though not targeted at the root servers, DoS attacks often use root servers as reflectors toward a victim network. We contrast our observations with those found in an earlier study of DNS root server performance by Danzig et al. [1].
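The dominant category of bogus load reported above, identical queries repeated from the same host within a measurement interval, can be sketched as a simple counting pass over a query trace. The tuple layout is an assumption for illustration; the study's actual trace format and repeat criteria are more detailed.

```python
# Hedged sketch: measuring what fraction of queries in one interval are
# repeats of an earlier identical query from the same host.

from collections import Counter

def repeated_query_fraction(queries):
    """queries: iterable of (src_ip, qname, qtype) tuples from one interval.

    A query counts as a repeat if an identical tuple was already seen,
    so n occurrences of the same tuple contribute n - 1 repeats.
    """
    counts = Counter(queries)
    total = sum(counts.values())
    repeats = sum(n - 1 for n in counts.values())
    return repeats / total if total else 0.0
```

On a trace where one host asks the same question four times and another asks once, 3 of the 5 queries are repeats, i.e. a fraction of 0.6, in the range the abstract reports for F.root-servers.net.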