GPGPUs (general-purpose graphics processing units) provide massive computational power. However, applying GPGPU technology to real-time computing is challenging due to the non-preemptive nature of GPGPUs. In particular, a job running on a GPGPU or a data copy between a GPGPU and the CPU is non-preemptive. As a result, a high-priority job arriving in the middle of a low-priority job execution or memory copy suffers from priority inversion. To address the problem, we present a new lightweight approach to supporting preemptive memory copies and job executions in GPGPUs. Moreover, in our approach, a GPGPU job and a memory copy between the GPGPU and the host CPU run concurrently to enhance responsiveness. To show the feasibility of our approach, we have implemented a prototype system for preemptive job executions and data copies on a GPGPU. The experimental results show that our approach can bound response times in a reliable manner. In addition, the response times of our approach are significantly shorter than those of the unmodified GPGPU runtime system, which supports no preemption, and of an advanced GPGPU model designed to support prioritization and performance isolation via preemptive data copies.
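The abstract does not detail the preemption mechanism; a common way to make a non-preemptive copy preemptible is to split it into fixed-size chunks and check for pending higher-priority work between chunks, bounding priority inversion to one chunk's copy time. The sketch below illustrates this idea in plain Python (all names are illustrative, not the paper's API):

```python
# Hypothetical sketch: making a long memory copy preemptible by chunking.
# Only one chunk is copied non-preemptively; between chunks the runtime
# may yield to a higher-priority request, bounding priority inversion.

CHUNK_SIZE = 4  # elements per non-preemptive chunk (illustrative)

def chunked_copy(src, dst, has_higher_priority_work, run_high_priority):
    """Copy src into dst chunk by chunk; between chunks, service any
    pending higher-priority request before continuing."""
    i = 0
    while i < len(src):
        # Non-preemptive region: copy one chunk.
        dst[i:i + CHUNK_SIZE] = src[i:i + CHUNK_SIZE]
        i += CHUNK_SIZE
        # Preemption point: at most one chunk of blocking for high-priority work.
        if has_higher_priority_work():
            run_high_priority()
    return dst
```

The chunk size trades off overhead (smaller chunks mean more preemption checks) against worst-case blocking time for high-priority jobs.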
Due to the relatively high node density and source-to-sink communication pattern, wireless sensor networks (WSNs) are subject to congestion and packet losses. Further, the availability of low-cost hardware, such as Cyclops cameras, is promoting wireless multimedia sensing to support, for example, visual surveillance. As a result, congestion control is becoming more critical in WSNs. In this paper, we present a lightweight distributed congestion control method for WSNs. We develop new metrics to detect congestion in each node by considering the queue lengths and channel conditions observed in the one-hop neighborhood. Based on the estimated level of congestion, each node dynamically adapts its packet transmission rate and balances the load among its one-hop neighbors to avoid creating congestion and bottleneck nodes. In a simulation study performed in OMNeT++, our approach significantly enhances the end-to-end (e2e) packet delivery ratio and reduces the e2e delay without increasing the total energy consumption, compared to the tested baseline approach.
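The abstract does not give the exact congestion metric; a plausible form, sketched below under assumed weights, combines local queue occupancy, one-hop neighbor queue occupancy, and observed channel load into a single index, with AIMD-style rate adaptation on top. The weights and threshold here are illustrative, not the paper's values:

```python
# Hypothetical sketch: a per-node congestion index and rate adaptation.
# The 0.5/0.3/0.2 weights and the 0.7 threshold are assumptions for
# illustration, not values from the paper.

def congestion_level(own_queue, queue_cap, neighbor_queues, channel_busy_ratio):
    """Congestion index in [0, 1]: local queue occupancy, average
    one-hop neighbor occupancy, and observed channel busy ratio."""
    local = own_queue / queue_cap
    neigh = sum(q / queue_cap for q in neighbor_queues) / max(len(neighbor_queues), 1)
    return 0.5 * local + 0.3 * neigh + 0.2 * channel_busy_ratio

def adapt_rate(rate, level, threshold=0.7, min_rate=1.0):
    """Multiplicative decrease under congestion, additive increase otherwise."""
    return max(min_rate, rate * 0.5) if level > threshold else rate + 1.0
```

Because each node uses only information observable in its one-hop neighborhood, the scheme stays fully distributed with no extra control traffic beyond what neighbors already overhear.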
In this paper, we present a new MapReduce framework, called Grex, designed to leverage general purpose graphics processing units (GPUs) for parallel data processing. Grex provides several new features. First, it supports a parallel split method to tokenize input data of variable sizes, such as words in e-books or URLs in web documents, in parallel using GPU threads. Second, Grex evenly distributes data to map/reduce tasks to avoid data partitioning skews. In addition, Grex provides a new memory management scheme to enhance the performance by exploiting the GPU memory hierarchy. Notably, all these capabilities are supported via careful system design without requiring any locks or atomic operations for thread synchronization. The experimental results show that our system is up to 12.4x and 4.1x faster than two state-of-the-art GPU-based MapReduce frameworks for the tested applications.
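The parallel split idea can be emulated sequentially: each GPU thread inspects one character independently and decides whether a token starts there, after which token starts are compacted (a parallel scan on the GPU). The Python sketch below models this data-parallel step with a list comprehension; it is an illustration of the concept, not Grex's actual implementation:

```python
# Hypothetical sketch of parallel tokenization of variable-size tokens.
# Each "thread" i inspects text[i] independently: a token starts where a
# non-delimiter follows a delimiter (or the start of the input).

DELIMS = set(" \n\t")

def parallel_split(text):
    # Embarrassingly parallel step: one boundary test per character.
    is_start = [(text[i] not in DELIMS) and (i == 0 or text[i - 1] in DELIMS)
                for i in range(len(text))]
    # On a GPU this compaction would be a parallel prefix-sum (scan).
    starts = [i for i, s in enumerate(is_start) if s]
    tokens = []
    for s in starts:
        e = s
        while e < len(text) and text[e] not in DELIMS:
            e += 1
        tokens.append(text[s:e])
    return tokens
```

Because every boundary test depends only on two adjacent characters, no locks or atomic operations are needed, matching the lock-free design the abstract highlights.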
There is a variety of knapsack problems in the literature. The multidimensional 0-1 knapsack problem (MKP) is an NP-hard combinatorial optimization problem with many application areas, and many approaches have been proposed for solving it. In this paper, an empirical investigation of memetic algorithms (MAs) that hybridize genetic algorithms (GAs) with hill climbing for solving MKPs is provided. Two distinct sets of experiments are performed. During the initial experiments, the MA parameters are tuned, and a GA and four MAs, each using a different hill climbing method based on the same configuration, are evaluated. In the second set of experiments, a self-adaptive (co-evolving) multimeme memetic algorithm (MMA) is compared to the best MA from the parameter tuning experiments. The MMA utilizes the evolutionary process as a learning mechanism for choosing the appropriate hill climbing method to improve a candidate solution at a given time. Two well-known MKP benchmarks are used during the experiments.
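The GA-plus-hill-climbing hybrid can be sketched compactly. The code below shows one possible MA for a single-constraint 0-1 knapsack (the MKP adds more weight constraints but the structure is identical): a GA with crossover and mutation, where each offspring is refined by first-improvement bit-flip hill climbing before entering the population. All operators and parameters here are illustrative choices, not the paper's tuned configuration:

```python
# Hypothetical sketch: a memetic algorithm (GA + hill climbing) for the
# 0-1 knapsack problem. Operators and parameters are illustrative.
import random

def fitness(x, values, weights, cap):
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    return sum(vi for vi, xi in zip(values, x) if xi) if w <= cap else 0

def hill_climb(x, values, weights, cap):
    """First-improvement bit-flip hill climbing (one of several possible
    local search operators; the paper compares four variants)."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x[:]
            y[i] ^= 1
            if fitness(y, values, weights, cap) > fitness(x, values, weights, cap):
                x, improved = y, True
    return x

def memetic(values, weights, cap, pop=20, gens=30, seed=1):
    rng = random.Random(seed)
    n = len(values)
    P = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda x: fitness(x, values, weights, cap), reverse=True)
        kids = []
        while len(kids) < pop:
            a, b = rng.sample(P[:pop // 2], 2)        # truncation selection
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if rng.random() < 0.1:
                child[rng.randrange(n)] ^= 1          # bit-flip mutation
            kids.append(hill_climb(child, values, weights, cap))  # memetic step
        P = kids
    return max(P, key=lambda x: fitness(x, values, weights, cap))
```

Applying hill climbing to every offspring is the Lamarckian variant; the multimeme extension described in the abstract would additionally attach to each individual a gene selecting which hill climbing operator to apply.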
Research on smart environments saturated with ubiquitous computing devices is rapidly advancing while raising serious privacy issues. According to recent studies, privacy concerns significantly hinder widespread adoption of smart home technologies. Previous work has shown that it is possible to infer the activities of daily living within environments equipped with wireless sensors by monitoring radio fingerprints and traffic patterns. Since data encryption cannot prevent privacy invasions exploiting transmission pattern analysis and statistical inference, various methods based on fake data generation for concealing traffic patterns have been studied. In this paper, we describe an energy-efficient, lightweight, low-latency algorithm for creating dummy activities that are semantically similar to the observed phenomena. By using these cloaking activities, the amount of fake data transmissions can be flexibly controlled to support a trade-off between energy efficiency and privacy protection. According to experiments using real data collected from a smart home environment, our proposed method can extend the lifetime of the network by more than 2× compared to previous methods in the literature. Furthermore, the activity cloaking method supports low-latency transmission of real data while also significantly reducing the accuracy of wireless snooping attacks.
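The tunable energy/privacy trade-off can be made concrete with a small sketch: dummy events drawn from a pool of plausible activities are interleaved with real transmissions at a configurable rate, so real data is never delayed while the injection rate controls energy cost. This is an illustration of the general idea, not the paper's algorithm:

```python
# Hypothetical sketch: interleaving semantically plausible dummy events
# with real transmissions. fake_ratio controls the energy/privacy
# trade-off: a higher ratio means more dummy traffic (better cloaking
# against traffic analysis) but shorter network lifetime.
import random

def cloak_schedule(real_events, fake_ratio, activity_pool, seed=7):
    """Emit real events immediately (low latency), injecting a dummy
    activity after each one with probability fake_ratio."""
    rng = random.Random(seed)
    out = []
    for ev in real_events:
        out.append(("real", ev))          # real data is never held back
        if rng.random() < fake_ratio:
            out.append(("fake", rng.choice(activity_pool)))
    return out
```

Drawing dummies from a pool of activities semantically similar to the observed ones is what makes the fake traffic statistically hard to distinguish from real sensor reports.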