New technologies as well as new ways of using network services are rapidly changing the Internet's landscape. These developments will have far-reaching implications for the architecture of the networks of the future. However, the current Internet design is plagued with a number of fundamental limitations, which makes its use as the sole basis for the networking applications of the future questionable. We believe that the Future Internet must allow the co-existence of diverse network designs and paradigms, both new and old, to remain open to innovation and meet the challenges of the future. In this paper, we propose to use network virtualization, embedded in an architectural framework, to achieve this goal and to lay the foundation for the deployment of novel concepts such as content-centric networking.
One possible key technology for the Future Internet is network virtualization. It allows numerous virtual networks to run in parallel, each adapted to different requirements, intended uses, or applications. Used consistently, network virtualization not only enables highly specialized networks but also allows new protocols and services to run in separate networks. This creates opportunities for rapid service deployment, especially for services based on new protocols. Much current research is concerned with network virtualization or related aspects such as its management or signaling. This paper is different: it looks at network virtualization from another angle. We describe our Node Architecture for the Future Internet, which uses network virtualization as a fundamental concept. Its goal is to give users access to a vast number of virtual networks and to exploit the possibilities of network virtualization.
TCP is suboptimal in heterogeneous wired/wireless networks because it reacts in the same way to losses caused by congestion and losses caused by link errors. In this paper, we propose to improve TCP performance in wired/wireless networks by endowing it with a classifier that can distinguish packet loss causes. In contrast to other proposals, we change neither TCP's congestion control nor its error recovery. A packet loss classified as a link error is simply ignored by TCP's congestion control and recovered as usual, while a packet loss classified as a congestion loss triggers both mechanisms as usual. To build our classification algorithm, a database of pre-classified losses is gathered by simulating a large set of random network conditions, and classification models are automatically built from this database using supervised learning methods. Several learning algorithms are compared for this task. Our simulations of different scenarios show that adding such a classifier to TCP can substantially improve TCP throughput in wired/wireless networks without compromising TCP-friendliness in both wired and wireless environments.
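The pipeline the abstract describes (simulate losses, label them, learn a classifier, consult it on each loss) can be illustrated with a minimal sketch. The feature choice, distributions, and the decision-stump learner here are illustrative assumptions, not the paper's actual algorithm or feature set; the paper compares several full-fledged supervised learning methods.

```python
import random

random.seed(0)

def simulate_loss_database(n=1000):
    """Hypothetical stand-in for the paper's simulation step: return
    (feature, label) pairs, where the feature is a normalized queuing-delay
    measure observed just before the loss.  Congestion losses tend to follow
    high queuing delay; link-error losses do not.  Numbers are illustrative."""
    db = []
    for _ in range(n):
        if random.random() < 0.5:
            db.append((random.gauss(0.8, 0.1), "congestion"))
        else:
            db.append((random.gauss(0.3, 0.1), "link_error"))
    return db

def train_stump(db):
    """A minimal supervised learner: pick the threshold on the single
    feature that minimizes training error (a one-level decision tree)."""
    best_t, best_err = 0.0, float("inf")
    for t in (x / 100 for x in range(101)):
        err = sum((f >= t) != (lbl == "congestion") for f, lbl in db)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def classify(threshold, feature):
    """The per-loss decision TCP would consult: reduce cwnd only for
    'congestion'; for 'link_error', just retransmit."""
    return "congestion" if feature >= threshold else "link_error"

db = simulate_loss_database()
t = train_stump(db)
print(classify(t, 0.9))  # high queuing delay before the loss
print(classify(t, 0.2))  # low queuing delay before the loss
```

The key property mirrored here is that the classifier sits beside TCP's existing mechanisms rather than replacing them: error recovery always runs, and only the congestion-control reaction is gated on the predicted loss cause.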
We first study the accuracy of two well-known analytical models of the average throughput of long-term TCP flows, namely the so-called SQRT and PFTK models, and show that these models are far from accurate in general. Our simulations, based on a large set of long-term TCP sessions, show that 70% of their predictions exceed the bounds of TCP-friendliness, calling into question their use in the design of new TCP-friendly transport protocols. We then investigate the reasons for this inaccuracy and show that it is largely due to the lack of discrimination between the two packet loss detection methods used by TCP, namely triple duplicate acknowledgments and timeout expirations. We then apply various machine learning techniques to infer new models of the average TCP throughput. We show that they are more accurate than the SQRT and PFTK models, even without the above discrimination, and improve further when the machine-learnt models are allowed to distinguish the two loss detection methods. Although our models are not analytical formulas, they can be plugged into transport protocols to make them TCP-friendly. Our results also suggest that analytical models of TCP throughput would benefit from incorporating the timeout loss rate.
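For context, the two models under study can be written down directly: the SQRT model estimates throughput as (MSS/RTT)·sqrt(3/2p), while the PFTK model (Padhye et al.) adds a timeout term to the denominator. The sketch below evaluates both; the MSS, RTT, and retransmission-timeout values are illustrative assumptions, and the simplified PFTK form shown omits the model's window-limitation term.

```python
import math

MSS = 1460       # segment size in bytes (illustrative)
RTT = 0.1        # round-trip time in seconds (illustrative)
T0  = 4 * RTT    # retransmission timeout (illustrative choice)

def sqrt_model(p):
    """SQRT model: B(p) = (MSS/RTT) * sqrt(3/(2p)), in bytes/s."""
    return (MSS / RTT) * math.sqrt(3 / (2 * p))

def pftk_model(p):
    """Simplified PFTK model: the extra denominator term accounts for
    losses detected by timeout rather than triple duplicate ACKs."""
    denom = (RTT * math.sqrt(2 * p / 3)
             + T0 * min(1.0, 3 * math.sqrt(3 * p / 8)) * p * (1 + 32 * p ** 2))
    return MSS / denom

for p in (0.01, 0.05, 0.20):
    print(p, round(sqrt_model(p)), round(pftk_model(p)))
```

Because the PFTK denominator only adds a positive timeout term, its estimate is always below the SQRT estimate, with the gap widening at high loss rates; neither formula, however, distinguishes *which* detection method produced a given loss, which is the inaccuracy the abstract identifies.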