In recent years, research has been conducted on models of large systems, especially distributed systems, to analyze and understand their behavior. Simulators are now commonly used in this area and are becoming more complex. Most provide frameworks for simulating application scheduling on various Grid infrastructures; others are developed specifically for modeling networks; but only a few simulate energy-efficient algorithms. This article describes the tools that need to be implemented in a simulator in order to support energy-aware experimentation. The emphasis is on DVFS (Dynamic Voltage and Frequency Scaling) simulation, from its implementation in the CloudSim simulator to the methodology adopted to validate its behavior. In addition, a scientific application is used as a use case in both real experiments and simulations, highlighting the close relationship between DVFS efficiency and hardware architecture. A second use case, based on Cloud applications represented as DAGs (also a new functionality of CloudSim), demonstrates that DVFS efficiency also depends on the intrinsic behavior of the middleware.
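To make the DVFS mechanism concrete, the sketch below shows an "ondemand"-style frequency governor of the kind energy-aware simulators typically model. The frequency steps, threshold, and power model are illustrative assumptions, not CloudSim's actual API.

```python
# Illustrative sketch (hypothetical values, not CloudSim's API): an
# "ondemand"-style DVFS governor that selects the lowest frequency
# whose capacity covers the current load with some headroom.

FREQ_STEPS_MHZ = [800, 1200, 1600, 2000, 2400]  # assumed P-states
UP_THRESHOLD = 0.8  # scale up once utilization would exceed 80%

def select_frequency(load_mips: float) -> int:
    """Return the lowest frequency keeping utilization below the threshold."""
    for f in FREQ_STEPS_MHZ:
        if load_mips <= f * UP_THRESHOLD:
            return f
    return FREQ_STEPS_MHZ[-1]  # saturated: run at full speed

def dynamic_power(freq_mhz: int, base_power_w: float = 10.0) -> float:
    """Toy cubic model: P ~ f^3 when voltage scales with frequency."""
    rel = freq_mhz / FREQ_STEPS_MHZ[-1]
    return base_power_w * rel ** 3
```

For example, a 1000-MIPS load is served at 1600 MHz rather than 2400 MHz, which under this toy model cuts dynamic power to roughly (1600/2400)^3 ≈ 30% of peak, illustrating why DVFS gains depend so strongly on the load profile the hardware sees.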
In moving toward an interoperability architecture, the concept of network centricity is a step in the right direction: all modules connect to the network, not to each other, and a handful of good network-citizenship rules provide a syntactical guide for attachment. From the point of view of the network designer this is sufficient; we have enough to build internetworks for the common good. The continued burgeoning of the Internet constitutes an existence proof. But a common networking base is insufficient to reach the goal of cross-system interoperability: the large information system. Many standardization efforts have attempted to solve this problem, but appear to have lacked the necessary scope. For instance, there have been many efforts aimed at standardizing data elements; these efforts, if followed through, yield some gains, but never seem to quite reach the interoperability goal. If we are truly to erect an interoperability architecture, we need to broaden the scope. The problem of cross-program, cross-service, and cross-ally interoperability requires that we agree on the what of modularization, not just the how. This paper is aimed at framing the interoperability architecture problem. On modularization: the core of architecture, the way things fit together, is a sense of modularization. This is the part of the problem that is perhaps the least mechanical and requires judgment; experience, no doubt, helps. Architectural conformity must be traded off against other desired characteristics. The objective is that modules become inherently interoperable, so that components delivered by multiple programs can be assembled for particular tasks. The prerequisite is a network-centric foundation.
Solution and interfacial properties of water-soluble hybrid linear−dendritic polyether
copolymers are investigated by static and dynamic surface tension measurements and
adsorption experiments on polymeric substrates. The results obtained show that the block
copolymers are able to form mono- and multimolecular aggregates in water. Contacting a
solid polymeric substrate with an aqueous solution of hybrid block copolymer increases the
hydrophilicity of the substrate. Adsorption on the hydrophobic surface of poly(ethylene
terephthalate) proceeds only through the dendritic blocks of the hybrid macromolecule. For
more hydrophilic substrates such as regenerated cellulose, both the poly(ethylene glycol)
tail and the poly(benzyl ether) dendrons adsorb on the surface, increasing its hydrophilicity.
Abstract-The European Telecommunications Standards Institute (ETSI) has released a set of specifications defining a RESTful architecture for seamless service provisioning across heterogeneous Machine-to-Machine (M2M) systems. The current version of this architecture is strongly centralized and therefore requires enhancements to its scalability, fault tolerance, and flexibility. To bridge this gap, we present an Overlay Service Capability Layer based on an Information Centric Networking design. Key features, example use cases, and preliminary performance assessments are discussed to highlight the potential of our approach.
Abstract-Information-Centric Networking (ICN) is an emerging network paradigm based on name-identified data objects and in-network caching, which allow content to be distributed in a scalable and cost-efficient manner. With the rapid growth of IoT traffic, ICN is considered a suitable architecture for IoT networks: it provides unique persistent naming, in-network caching, and multicast communication, which reduce both the load on data producers and the response latency. Using ICN in an IoT environment requires studying caching policies in terms of both cache placement strategies and cache replacement policies. To this end, we address caching challenges in this paper with the aim of identifying which caching policies are suitable for IoT networks. Simulation results show that combining the consumer-cache placement strategy with the RR (Random Replacement) cache replacement policy is the most suitable choice for IoT environments in terms of hop reduction ratio, server hit reduction, and response latency.