Large content providers, known as hyper-giants, are responsible for sending the majority of the content traffic to consumers. These hyper-giants operate highly distributed infrastructures to cope with the ever-increasing demand for online content. To achieve commercial-grade performance of Web applications, enhanced end-user experience, improved reliability, and scaled network capacity, hyper-giants are increasingly interconnecting with eyeball networks at multiple locations. This poses new challenges for both (1) the eyeball networks having to perform complex inbound traffic engineering, and (2) hyper-giants having to map end-user requests to appropriate servers. We report on our multi-year experience in designing, building, rolling-out, and operating the first-ever large scale system, the Flow Director, which enables automated cooperation between one of the largest eyeball networks and a leading hyper-giant. We use empirical data collected at the eyeball network to evaluate its impact over two years of operation. We find very high compliance of the hyper-giant to the Flow Director's recommendations, resulting in (1) close to optimal user-server mapping, and (2) 15% reduction of the hyper-giant's traffic overhead on the ISP's long-haul links, i.e., benefits for both parties and end-users alike.
In March 2020, the World Health Organization declared the Corona Virus Disease 2019 (COVID-19) outbreak a global pandemic. As a result, billions of people were either encouraged or forced by their governments to stay home to reduce the spread of the virus. This caused many to turn to the Internet for work, education, social interaction, and entertainment. With Internet demand rising at an unprecedented rate, the question emerged of whether the Internet could sustain this additional load. To answer this question, this paper reviews the impact of the first year of the COVID-19 pandemic on Internet traffic in order to analyze its performance. To keep our study broad, we collect and analyze Internet traffic data from multiple locations at the core and edge of the Internet. From this, we characterize how traffic and application demands changed, describe the "new normal," and explain how the Internet reacted during these unprecedented times.
The vision of the Network of the Future cannot be separated from the fact that today's networks and networking services are subject to sophisticated and very effective attacks. When these attacks first appeared, spoofing and distributed denial-of-service attacks were treated as an apocalypse for networking. Now, they are considered moderate damage, whereas more sophisticated and inconspicuous attacks, such as botnet activities, might have greater and more far-reaching impact. As the Internet expands to mobile phones and smart dust, and as its social coverage broadens towards the realization of ubiquitous computing (with communication), the concerns about security and privacy have become deeper and the problems more challenging than ever. Re-designing the Internet as the Network of the Future is self-motivating for researchers, and security and privacy cannot again be provided as separate, external, add-on solutions. In this paper, we discuss the security and privacy challenges of the Network of the Future and try to delimit the solution space on the basis of emerging techniques. We also review methods that help quantify security and privacy, in an effort to provide a more systematic and quantitative treatment of the area in the future.
Although traffic between Web servers and Web browsers is readily apparent to many knowledgeable end users, fewer are aware of the extent of server-to-server Web traffic carried over the public Internet. We refer to the former class of traffic as front-office Internet Web traffic and the latter as back-office Internet Web traffic (or just front-office and back-office traffic, for short). Back-office traffic, which may or may not be triggered by end-user activity, is essential for today's Web as it supports a number of popular but complex Web services, including large-scale content delivery, social networking, indexing, searching, advertising, and proxy services. This paper takes a first look at back-office traffic, measuring it from various vantage points, including from within ISPs, IXPs, and CDNs. We describe techniques for identifying back-office traffic based on the roles that this traffic plays in the Web ecosystem. Our measurements show that back-office traffic accounts for a significant fraction not only of core Internet traffic, but also of Web transactions in terms of requests and responses. Finally, we discuss the implications and opportunities that the presence of back-office traffic presents for the evolution of the Internet ecosystem.