Abstract—The scalability of modern data centers has become a practical concern and has attracted significant attention in recent years. In contrast to existing solutions that require changes to the network architecture and routing protocols, this paper proposes using traffic-aware virtual machine (VM) placement to improve network scalability. By optimizing the placement of VMs on host machines, traffic patterns among VMs can be better aligned with the communication distance between them; e.g., VMs with large mutual bandwidth usage are assigned to host machines in close proximity. We formulate VM placement as an optimization problem and prove its hardness. We design a two-tier approximation algorithm that efficiently solves the VM placement problem at very large problem sizes. Given the significant differences in the traffic patterns seen in current data centers and the structural differences among recently proposed data center architectures, we further conduct a comparative analysis of the impact of traffic patterns and network architectures on the potential performance gain of traffic-aware VM placement. We use traffic traces collected from production data centers to evaluate our proposed VM placement algorithm, and we show a significant performance improvement compared to existing generic methods that do not take advantage of traffic patterns and data center network characteristics.

I. INTRODUCTION

Modern virtualization-based data centers are becoming the hosting platform for a wide spectrum of composite applications. With an increasing trend toward more communication-intensive applications in data centers, the bandwidth usage between virtual machines (VMs) is growing rapidly. This raises a number of concerns about the scalability of the underlying network architecture, an issue that has attracted significant attention recently.
Techniques in these proposals include rich connectivity at the edge of the network and dynamic routing protocols to balance traffic load. In this paper, we tackle the scalability issue from a different perspective: by optimizing the placement of VMs on host machines. Normally, VM placement is decided by capacity planning tools such as VMware Capacity Planner [8], IBM WebSphere CloudBurst [9], Novell PlateSpin Recon [10], and Lanamark Suite [11]. These tools seek to consolidate VMs to save CPU, physical memory, and power, yet without considering the consumption of network resources. As a result, VM pairs with heavy mutual traffic can end up placed on host machines separated by a large network cost. To understand how often this happens in practice, we conducted a measurement study in operational data centers and observed three apparent trends:
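To make the placement idea concrete, the following is a minimal greedy sketch, not the paper's two-tier algorithm, that assigns the heaviest-communicating VM pairs to the host pairs with the lowest network cost. The dictionary shapes and the one-VM-per-host assumption are ours, for illustration only.

```python
def greedy_placement(traffic, cost):
    """Toy traffic-aware placement: map VM pairs with the highest mutual
    traffic onto host pairs with the lowest network cost.
    traffic: {frozenset({vm_a, vm_b}): bandwidth}
    cost:    {frozenset({host_x, host_y}): network cost}
    Assumes one VM per host and, for simplicity, skips a VM pair if
    either endpoint has already been placed."""
    placement, used_hosts = {}, set()
    host_pairs = sorted(cost, key=cost.get)  # cheapest links first
    for vm_pair in sorted(traffic, key=traffic.get, reverse=True):
        a, b = tuple(vm_pair)
        if a in placement or b in placement:
            continue
        for host_pair in host_pairs:
            h1, h2 = tuple(host_pair)
            if h1 not in used_hosts and h2 not in used_hosts:
                placement[a], placement[b] = h1, h2
                used_hosts.update((h1, h2))
                break
    return placement
```

The heaviest pair grabs the cheapest link, the next-heaviest pair the cheapest remaining link, and so on; the real problem (a quadratic assignment variant, which is why the paper proves hardness) requires cleverer decomposition than this.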
The wide adoption of non-executable page protections in recent versions of popular operating systems has given rise to attacks that employ return-oriented programming (ROP) to achieve arbitrary code execution without the injection of any code. Existing defenses against ROP exploits either require source code or symbolic debugging information, or impose a significant runtime overhead, which limits their applicability for the protection of third-party applications. In this paper we present in-place code randomization, a practical mitigation technique against ROP attacks that can be applied directly on third-party software. Our method uses various narrow-scope code transformations that can be applied statically, without changing the location of basic blocks, allowing the safe randomization of stripped binaries even with partial disassembly coverage. These transformations effectively eliminate about 10%, and probabilistically break about 80%, of the useful instruction sequences found in a large set of PE files. Since no additional code is inserted, in-place code randomization does not incur any measurable runtime overhead, enabling it to be easily used in tandem with existing exploit mitigations such as address space layout randomization. Our evaluation using publicly available ROP exploits and two ROP code generation toolkits demonstrates that our technique prevents the exploitation of the tested vulnerable Windows 7 applications, including Adobe Reader, as well as the automated construction of alternative ROP payloads that aim to circumvent in-place code randomization using only the remaining unaffected instruction sequences.
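One of the narrow-scope transformations the abstract refers to, atomic instruction substitution, can be sketched in a few lines: the two-byte register-to-register x86 MOV has two equivalent encodings (opcodes 0x89 and 0x8B with the ModRM reg and rm fields swapped), so flipping between them changes the byte stream, and thereby any unintended gadget bytes inside it, without moving or resizing any code. The sketch below scans raw bytes for simplicity; an actual implementation would operate on disassembled instructions to know where instruction boundaries lie.

```python
def swap_mov_encoding(code: bytes) -> bytes:
    """Rewrite reg-reg MOV instructions in place by toggling between the
    two equivalent encodings (opcode 0x89 <-> 0x8B, reg/rm swapped).
    Only handles the two-byte form with mod == 11 (register-register)."""
    out = bytearray(code)
    i = 0
    while i < len(out) - 1:
        op, modrm = out[i], out[i + 1]
        if op in (0x89, 0x8B) and (modrm & 0xC0) == 0xC0:
            out[i] = 0x89 if op == 0x8B else 0x8B
            # swap the reg (bits 3-5) and rm (bits 0-2) fields
            out[i + 1] = ((modrm & 0xC0)
                          | ((modrm & 0x38) >> 3)
                          | ((modrm & 0x07) << 3))
            i += 2
        else:
            i += 1
    return bytes(out)
```

For example, `mov eax, ebx` encoded as `8B C3` contains the byte `0xC3` (`ret`), a potential unintended gadget ending; re-encoding it as the equivalent `89 D8` removes that byte without changing program behavior.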
We compare, using data envelopment analysis (DEA) and meta-frontier analysis (MFA), the performance of Islamic and conventional banks over the period 2004–2009. The use of nonparametric MFA is new to the Islamic banking context. Our DEA finds no significant difference in mean efficiency between conventional and Islamic banks when efficiency is measured relative to a common frontier. The MFA, however, reveals some fundamental differences between the two bank types. In particular, the modus operandi of Islamic banking appears to be less efficient on average than the conventional one. Managers of Islamic banks, however, make up for this, as mean efficiency in Islamic banks is higher than in conventional banks when efficiency is measured relative to their own bank-type frontier. A second-stage analysis shows that differences between the two banking systems remain even after banking environment and bank-level characteristics have been taken into account. These findings are relevant to both policy-makers and regulators. In particular, Islamic banks should explore the benefits of moving to a more standardized system of banking, while the underperformance of conventional bank managers could be examined in the context of the ongoing remuneration culture.
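For readers unfamiliar with the decomposition, the meta-frontier logic can be illustrated in a minimal single-input, single-output, constant-returns-to-scale setting, where a unit's DEA efficiency reduces to its productivity ratio relative to the best ratio on the relevant frontier. The bank names and numbers below are invented for illustration; real DEA solves linear programs over multiple inputs and outputs.

```python
def dea_efficiency(dmus):
    """Single-input, single-output CRS DEA: a unit's efficiency is its
    productivity ratio (output / input) divided by the best ratio among
    the units defining the frontier.  dmus: {name: (input, output)}"""
    best = max(y / x for x, y in dmus.values())
    return {name: (y / x) / best for name, (x, y) in dmus.items()}

# Invented example: two Islamic and two conventional banks.
islamic = {"I1": (10.0, 8.0), "I2": (10.0, 6.0)}
conventional = {"C1": (10.0, 10.0), "C2": (10.0, 5.0)}

meta = dea_efficiency({**islamic, **conventional})  # common (meta) frontier
group = dea_efficiency(islamic)                     # own-group frontier
# Metatechnology ratio: how close the group frontier is to the meta-frontier.
mtr = {name: meta[name] / group[name] for name in islamic}
```

In this toy data, I1 is fully efficient against its own group frontier (group efficiency 1.0) yet only 0.8-efficient against the pooled frontier, giving a metatechnology ratio of 0.8 — the same pattern as the abstract's finding that Islamic bank managers perform well relative to their own frontier while the banking technology itself lags.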
By comparing the failure risk for both bank types, we find that Islamic banks have a significantly lower risk of failure than their conventional peers. This lower risk holds both unconditionally and conditionally on bank-specific (microeconomic) variables as well as macroeconomic and market-structure variables. Our findings indicate that the design and implementation of early warning systems for bank failure should recognize the distinct risk profiles of the two bank types.
Abstract—Query privacy in secure DBMSs is an important feature, although rarely formally considered outside the theoretical community. Because of the high overheads of guaranteeing privacy for complex queries, almost all previous works addressing practical applications consider limited queries (e.g., just keyword search) or provide only a weak guarantee of privacy. In this work, we address a major open problem in private DBs: efficient sublinear search for arbitrary Boolean queries. We consider a scalable DBMS with provable security for all parties, including protection of the data from both the server (who stores encrypted data) and the client (who searches it), as well as protection of the query and access control for the query. We design, build, and evaluate the performance of a rich DBMS system suitable for real-world deployment on today's medium- to large-scale DBs. On a modern server, we are able to query a formula over a 10 TB, 100M-record DB with 70 searchable index terms per DB row in time comparable to (insecure) MySQL (many practical queries can be executed privately at 1.2-3 times the cost of MySQL, although some queries are costlier). We support a rich query set, including search on arbitrary Boolean formulas over keywords and ranges, support for stemming, and free keyword searches over text fields. We identify and permit a reasonable and controlled amount of leakage, proving that no further leakage is possible. In particular, we allow leakage of some search-pattern information, but protect the query and data, provide a high level of privacy for individual terms in the executed search formula, and hide the difference between a query that returned no results and one that returned a very small result set. We also support private and complex access policies, integrated into the search process so that a query with an empty result set and a query that fails the policy are hard to tell apart.
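The paper's actual protocol is far richer, but the basic flavor of token-based encrypted search can be sketched as follows: the client derives keyword tokens with a keyed PRF (HMAC here), the server indexes documents only by those tokens, and a conjunctive (AND) query is an intersection of posting lists. This toy leaks the full search and access pattern and has none of the paper's sublinearity, access-control, or leakage guarantees; all names are ours.

```python
import hmac
import hashlib

def token(key: bytes, word: str) -> bytes:
    """Client-side: derive an opaque search token for a keyword."""
    return hmac.new(key, word.encode(), hashlib.sha256).digest()

def build_index(key: bytes, docs: dict) -> dict:
    """docs: {doc_id: set of keywords}.  The server stores only HMAC
    tokens as index keys, never the plaintext keywords."""
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(token(key, w), set()).add(doc_id)
    return index

def search_and(index: dict, tokens: list) -> set:
    """Server-side conjunctive query: intersect the posting lists of the
    submitted tokens without learning the underlying keywords."""
    results = None
    for t in tokens:
        ids = index.get(t, set())
        results = ids if results is None else results & ids
    return results or set()
```

A client holding the key `b"client-secret"` (hypothetical) can then ask for documents matching both "alice" and "bob" by sending the two tokens; the server answers from the token-keyed index alone.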