Abstract. Conducting extensive testing of anonymization techniques is critical to assess their robustness and to identify the scenarios where they are most suitable. However, access to real microdata is highly restricted, and the microdata that is publicly available is usually anonymized or aggregated, which reduces its value for testing purposes. In this paper, we present COCOA, a framework for the generation of realistic synthetic microdata that allows users to define multi-attribute relationships in order to preserve the functional dependencies of the data. We demonstrate how COCOA strengthens the testing of anonymization techniques by broadening the number and diversity of test scenarios. Results also show that COCOA is practical for generating large datasets.
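To illustrate the kind of multi-attribute relationship such a generator preserves, the following minimal Python sketch produces synthetic records in which an occupation → salary-band functional dependency always holds. The attribute names, bands, and record layout are hypothetical illustrations, not part of COCOA itself:

```python
import random

# Hypothetical multi-attribute relationship: each occupation determines a
# salary band, so the functional dependency occupation -> salary band holds
# in every generated record.
SALARY_BANDS = {
    "nurse": (30_000, 60_000),
    "engineer": (50_000, 110_000),
    "teacher": (28_000, 55_000),
}

def generate_records(n, seed=42):
    """Generate n synthetic microdata records with a consistent dependency."""
    rng = random.Random(seed)
    records = []
    for i in range(n):
        occupation = rng.choice(sorted(SALARY_BANDS))
        low, high = SALARY_BANDS[occupation]
        records.append({
            "id": i,
            "occupation": occupation,
            "salary": rng.randint(low, high),
        })
    return records

records = generate_records(1000)
# Every record respects the occupation -> salary-band dependency.
assert all(
    SALARY_BANDS[r["occupation"]][0] <= r["salary"] <= SALARY_BANDS[r["occupation"]][1]
    for r in records
)
```

A real generator would additionally model realistic value distributions per attribute; the point here is only that relationships between attributes survive generation.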
Summary. Nowadays, clustered environments are commonly used in high-performance computing and enterprise-level applications to achieve faster response times and higher throughput than single-machine environments. Nevertheless, effectively managing the workloads in these clusters has become a new challenge. As a load balancer is typically used to distribute the workload among the cluster's nodes, multiple research efforts have concentrated on enhancing the capabilities of load balancers. Our previous work presented a novel adaptive load balancing strategy (TRINI) that improves the performance of a clustered Java system by avoiding the performance impacts of major garbage collection, an important cause of performance degradation in Java. The aim of this paper is to strengthen the validation of TRINI by extending its experimental evaluation in terms of generality, scalability and reliability. Our results show that TRINI can achieve significant performance improvements, as well as consistent behaviour, when applied to a set of commonly used load balancing algorithms, demonstrating its generality. TRINI also proved to be scalable across different cluster sizes, as its performance improvements did not noticeably degrade as the cluster size increased. Finally, TRINI exhibited reliable behaviour over extended time periods, introducing only a small overhead to the cluster in such conditions. These results offer practitioners a valuable reference regarding the benefits that a load balancing strategy based on garbage collection can bring to a clustered Java system. Copyright © 2016 John Wiley & Sons, Ltd.
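As an illustration of the general idea behind a GC-aware strategy of this kind, the following minimal Python sketch implements a round-robin balancer that skips nodes whose heap usage suggests an imminent major collection. The node model, metric, and threshold are assumptions made for illustration only, not TRINI's actual algorithm:

```python
import itertools

# Hypothetical node state: old-generation heap usage as a fraction of capacity.
class Node:
    def __init__(self, name, heap_used):
        self.name = name
        self.heap_used = heap_used  # 0.0 .. 1.0

GC_THRESHOLD = 0.85  # nodes above this are assumed close to a major GC pause

def gc_aware_round_robin(nodes):
    """Round-robin that skips nodes likely to trigger a major collection."""
    cycle = itertools.cycle(nodes)
    while True:
        for _ in range(len(nodes)):
            node = next(cycle)
            if node.heap_used < GC_THRESHOLD:
                yield node
                break
        else:
            # Every node is near a major GC: fall back to plain round-robin.
            yield next(cycle)

nodes = [Node("n1", 0.40), Node("n2", 0.90), Node("n3", 0.55)]
balancer = gc_aware_round_robin(nodes)
picks = [next(balancer).name for _ in range(6)]
# "n2" is skipped while its heap usage exceeds the threshold.
```

In a real deployment the heap-usage readings would be refreshed continuously from the JVMs; here they are static to keep the sketch self-contained.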
The identification of performance issues and the diagnosis of their root causes are time-consuming and complex tasks, especially in clustered environments. To simplify these tasks, researchers have been developing tools with built-in expertise for practitioners. However, various limitations exist in these tools that prevent their efficient usage in the performance testing of clusters (e.g. the need to manually analyse huge volumes of distributed results). In previous work, we introduced a policy-based adaptive framework (PHOEBE) that automates the usage of diagnosis tools in the performance testing of clustered systems, in order to improve a tester's productivity by decreasing the effort and expertise needed to use such tools effectively. This paper extends that work by broadening the set of policies available in PHOEBE, as well as by performing a comprehensive assessment of PHOEBE in terms of its benefits, costs and generality (with respect to the diagnosis tool used). The evaluation comprised a set of experiments assessing the different trade-offs commonly experienced by a tester when using a performance diagnosis tool, as well as the time savings that PHOEBE can bring to the performance testing and analysis processes. Our results have shown that PHOEBE can drastically reduce the effort required by a tester to conduct performance testing and analysis in a cluster. PHOEBE also exhibited consistent behaviour (i.e. similar time savings and resource utilisations) when applied to a set of commonly used diagnosis tools, demonstrating its generality. Finally, PHOEBE proved capable of simplifying the configuration of a diagnosis tool, addressing the identified trade-offs without the need for manual intervention from the tester. PHOEBE is implemented with the multi-agent architecture depicted in Figure 1.
There, it can be seen that PHOEBE is composed of three types of agents. The control agent is responsible for interacting with the load testing tool to know when the test starts and ends; it is also responsible for evaluating the policies and propagating the decisions to the other nodes. The application node agent is responsible for performing the required tasks in each application node (e.g. sample collection or sending the collected samples to the diagnosis tool). Finally, the diagnosis tool agent performs the required tasks in the diagnosis tool's node.

Here, the objective was to evaluate the potential trade-off between the number of samples concurrently processed by a diagnosis tool and the number of resources it requires to process the samples. The following sections describe this experiment and its results.

7.1. Experiment #4: proposed policies assessment. The objective of this experiment was to evaluate the behaviour of PHOEBE, as well as the set of proposed policies, in order to assess how well they fulfil their purpose of addressing the identified trade-offs without the need for manual intervention from the tester. The following sections describe this experiment and its results.

7.1.1. Experimental set-up....
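The policy-evaluation role of the control agent described above can be sketched as follows. The policy names, metric fields, and actions are illustrative assumptions, not PHOEBE's actual API:

```python
# Hypothetical policy objects: each policy inspects cluster metrics and, when
# its condition holds, returns an action for the node agents to apply.
class Policy:
    def __init__(self, name, condition, action):
        self.name, self.condition, self.action = name, condition, action

    def evaluate(self, metrics):
        return self.action if self.condition(metrics) else None

# Illustrative policies: throttle sampling when the diagnosis tool's CPU is
# saturated; batch uploads when too many samples are waiting.
policies = [
    Policy("throttle-sampling",
           lambda m: m["tool_cpu"] > 0.80,
           {"sampling_interval_ms": 2000}),
    Policy("batch-uploads",
           lambda m: m["pending_samples"] > 500,
           {"upload_batch_size": 100}),
]

def control_agent_step(metrics):
    """Evaluate every policy and collect decisions to propagate to node agents."""
    decisions = {}
    for policy in policies:
        action = policy.evaluate(metrics)
        if action:
            decisions.update(action)
    return decisions

decisions = control_agent_step({"tool_cpu": 0.92, "pending_samples": 120})
# Only the throttle-sampling policy fires for these metrics.
```

Separating the condition (when to act) from the action (what to change) is what lets such a framework adapt without manual intervention: testers declare policies once, and the control loop applies them during the run.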
Abstract. The advent of the Internet of Things (IoT) has led to a major change in the way we interact with increasingly ubiquitous connected devices such as smart objects and cyber-physical systems. It has also led to an exponential increase in the number of such Internet-connected devices over the last few years. Conducting extensive functional and performance testing is critical to assess the robustness and efficiency of IoT systems in order to validate them before their deployment in real life. However, creating an IoT test environment is a difficult and expensive task, usually requiring a significant amount of physical hardware and human effort. This paper proposes a method to emulate an IoT environment using the Network Emulator for Mobile Universes (NEMU), itself built on the popular QEMU system emulator, in order to construct a testbed of inter-connected, emulated Raspberry Pi devices. Additionally, we experimentally demonstrate how our method can be successfully applied to IoT by showing how such an emulated environment can be used to detect anomalies in an IoT system.
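As a simple illustration of the kind of anomaly detection an emulated testbed enables, the sketch below flags outlying device response times with a z-score test. The metric, data, and threshold are illustrative assumptions, not the paper's actual detection technique:

```python
import statistics

def detect_anomalies(samples, z_threshold=2.5):
    """Return indices of samples whose z-score exceeds the threshold."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # all samples identical: nothing to flag
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > z_threshold]

# Response times (ms) reported by emulated devices; one device misbehaves.
latencies = [12, 11, 13, 12, 14, 11, 13, 250, 12, 13]
anomalies = detect_anomalies(latencies)
# The 250 ms sample at index 7 stands out from the ~12 ms baseline.
```

In an emulated testbed such metrics can be collected from every virtual device at once, which is precisely what makes this kind of fleet-wide check cheap compared to physical hardware.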
Publisher: ACM. Link to online version: https://icpe2018.spec.org/ Item record/more information: http://hdl.handle.net/10197/9960

Abstract. Carrying out proper performance testing is considerably challenging. In particular, the identification of performance issues, as well as their root causes, is a time-consuming and complex process which typically requires several iterations of tests (as such issues can depend on the input workloads) and relies heavily on human expert knowledge. To improve this process, this paper presents an automated approach (extending some of our previous work) to dynamically adapt the workload (used by a performance testing tool) during the test runs. As a result, the performance issues of the tested application can be revealed more quickly, identifying them with less effort and expertise. Our experimental evaluation assessed the accuracy of the proposed approach and the time savings it brings to testers. The results have demonstrated the benefits of the approach by achieving a significant decrease in the time invested in performance testing (without compromising the accuracy of the test results), while introducing a low overhead in the testing environment.
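A minimal sketch of the general idea of adapting the workload during a test run might look as follows. The doubling strategy, SLA threshold, and stand-in test function are assumptions made for illustration, not the paper's actual algorithm:

```python
def find_saturation_point(run_test, start_users=10, max_users=1000, sla_ms=200):
    """Double the workload until the measured response time breaches the SLA,
    then return the first load level that exposed the issue."""
    users = start_users
    while users <= max_users:
        if run_test(users) > sla_ms:
            return users
        users *= 2
    return None  # no issue found within the explored range

# Stand-in for a real load test: response time grows sharply past 160 users.
def fake_test(users):
    return 50 + users if users < 160 else 50 + users * 3

saturation = find_saturation_point(fake_test)
```

By steering each iteration with the previous measurement, the loop reaches the interesting load region in a handful of runs instead of sweeping every level, which is the source of the time savings the abstract describes.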