Rate adaptation is a mechanism left unspecified by the 802.11 standards, yet it is critical to system performance because it exploits the multi-rate capability of the physical layer. In this paper, we conduct a systematic experimental study of rate adaptation over 802.11 wireless networks. Our main contributions are two-fold. First, we critique five design guidelines adopted by most existing algorithms. Our study reveals that these seemingly correct guidelines can be misleading in practice and thus incur a significant performance penalty in certain scenarios. The fundamental challenge is that rate adaptation must accurately estimate the channel condition despite the various dynamics caused by fading, mobility, and hidden terminals. Second, we design and implement a new Robust Rate Adaptation Algorithm (RRAA) that addresses this challenge. RRAA uses the short-term loss ratio to opportunistically guide its rate-change decisions, and an adaptive RTS filter to prevent collision losses from triggering rate decreases. Our extensive experiments show that RRAA outperforms three well-known rate adaptation solutions (ARF, AARF, and SampleRate) in all tested scenarios, with throughput improvements of up to 143%.
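The loss-ratio-driven core of such a scheme can be sketched as follows. This is an illustrative toy, not the paper's implementation: the rate table is the standard 802.11a/g set, but the two thresholds (`p_mtl`, `p_ori`) are made-up placeholder values, whereas RRAA derives per-rate thresholds and additionally applies its adaptive RTS filter.

```python
# Illustrative sketch of loss-ratio-driven rate selection in the spirit
# of RRAA. Thresholds below are placeholders, not the paper's values.

RATES_MBPS = [6, 9, 12, 18, 24, 36, 48, 54]  # 802.11a/g rate set

def next_rate_index(rate_idx, frames_sent, frames_lost,
                    p_mtl=0.40, p_ori=0.10):
    """Pick the next rate index from a short-term loss ratio.

    p_mtl: maximum tolerable loss ratio -> step the rate down.
    p_ori: opportunistic-rate-increase threshold -> step the rate up.
    (Both thresholds are illustrative; RRAA derives per-rate values.)
    """
    loss = frames_lost / frames_sent if frames_sent else 0.0
    if loss > p_mtl and rate_idx > 0:
        return rate_idx - 1            # channel looks bad: back off
    if loss < p_ori and rate_idx < len(RATES_MBPS) - 1:
        return rate_idx + 1            # channel looks good: try faster
    return rate_idx                    # stay put in between
```

The key design point the abstract highlights is that the loss ratio is measured over a short window, so the decision reacts to the current channel rather than to stale history.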
Many microscopic investigations of materials benefit from the recording of multiple successive images. These include techniques common to several types of microscopy, such as frame averaging to improve the signal-to-noise ratio (SNR) or time series to study dynamic processes, as well as more specific applications. In the scanning transmission electron microscope, these might include focal series for optical sectioning or aberration measurement, beam-damage studies, or camera-length series to study the effects of strain; in the scanning tunnelling microscope, they might include bias-voltage series to probe local electronic structure. Whatever the application, such investigations must begin with the careful alignment of these data stacks, an operation that is not always trivial. In addition, the presence of low-frequency scanning distortions can introduce intra-image shifts into the data. Here, we describe an improved automated method of performing non-rigid registration customised for the challenges unique to scanned microscope data, specifically addressing the issues of low-SNR data and images containing a large proportion of crystalline material and/or local features of interest such as dislocations or edges. Careful attention has been paid to artefact testing of the non-rigid registration method used, and the importance of this registration for the quantitative interpretation of feature intensities and positions is evaluated.
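The rigid pre-alignment step that any such stack registration begins with can be sketched with FFT cross-correlation. This is not the non-rigid method the abstract describes, only the standard whole-frame shift estimate that typically precedes it; the periodic-boundary assumption is exact for circular shifts and approximate for real drifting images.

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (row, col) shift such that
    np.roll(img, shift, axis=(0, 1)) best matches ref.

    FFT-based cross-correlation assumes periodic boundaries, so it is
    exact for circular shifts and approximate for real drifted frames.
    """
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # Fold peaks past the array midpoint back to negative shifts.
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, xcorr.shape))
```

For low-SNR frames of the kind the abstract targets, one would in practice correlate against a running average of already-aligned frames rather than a single noisy reference.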
The use of high-resolution imaging methods in scanning transmission electron microscopy (STEM) is limited in many cases by the sensitivity of the sample to the beam and the onset of electron-beam damage (for example, in the study of organic systems, in tomography, and during in situ experiments). To demonstrate that alternative acquisition strategies can help alleviate this beam-damage issue, here we apply compressive sensing via Bayesian dictionary learning to high-resolution STEM images. These computational algorithms have been applied to sets of images in which only a subset of pixels was sampled. Even when the number of sampled pixels is reduced to 5% of the original image, the algorithms can recover the original image from the reduced data set. We show that this approach is valid both for atomic-resolution images and for nanometer-resolution studies, such as those that might be used in tomography datasets, by applying the method to images of strontium titanate and zeolites. Because STEM images are acquired pixel by pixel as the beam is scanned over the surface of the sample, these post-acquisition manipulations of the images can, in principle, be implemented directly as a low-dose acquisition method with no change to the electron optics or the alignment of the microscope itself.
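The idea of recovering a full signal from a random subset of pixels can be illustrated with a deliberately simple one-dimensional toy. The sketch below swaps the paper's Bayesian dictionary learning for a fixed DCT basis and plain orthogonal matching pursuit, and samples 50% of the pixels rather than 5%; it shows only the principle that a signal sparse in some basis is recoverable from incoherent point samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal DCT-II basis: columns are cosine atoms. Smooth signals are
# sparse in this basis.
N = 64
n = np.arange(N)
Psi = np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N) * np.sqrt(2.0 / N)
Psi[:, 0] = np.sqrt(1.0 / N)

# Ground truth: a signal with 3 nonzero DCT coefficients (a stand-in for
# one scan line of an image).
c_true = np.zeros(N)
c_true[[2, 7, 21]] = [2.0, -1.5, 1.0]
x_true = Psi @ c_true

# "Acquire" only half of the pixels, at random positions.
idx = rng.choice(N, size=32, replace=False)
y, A = x_true[idx], Psi[idx, :]

def omp(A, y, n_steps, tol=1e-10):
    """Orthogonal matching pursuit: greedily build a sparse coefficient
    vector c with y ~= A @ c, refitting by least squares at each step."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(n_steps):
        if np.linalg.norm(residual) < tol:
            break                              # measurements explained
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    c_hat = np.zeros(A.shape[1])
    c_hat[support] = coef
    return c_hat

c_hat = omp(A, y, n_steps=10)   # a few more steps than the true sparsity
x_hat = Psi @ c_hat             # reconstruction from half the pixels
```

A learned dictionary, as in the paper, replaces the fixed `Psi` with atoms adapted to the image class, which is what pushes the workable sampling ratio far lower.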
Protecting the network layer of a mobile ad hoc network is an important research topic in wireless security. This paper describes our unified network-layer security solution for ad hoc networks, which protects both routing and packet-forwarding functionality in the context of the AODV protocol. To address the unique characteristics of ad hoc networks, we take a self-organized approach built on a fully localized design, without assuming any a priori trust or secret association between nodes. In our design, each node must hold a token in order to participate in network operations, and its local neighbors collaboratively monitor it to detect any misbehavior in routing or packet forwarding. Upon expiration of its token, each node renews it through multiple neighbors. The validity period of a node's token depends on how long the node has stayed in the network and behaved well. A well-behaving node accumulates credit and renews its token less and less frequently as time evolves. In essence, our security solution exploits collaboration among local nodes to protect the network layer without fully trusting any individual node.
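The credit-based token lifetime idea can be sketched numerically. The linear growth law and the constants below are placeholders chosen for illustration, not the paper's actual formula; the point is only that a well-behaved node's renewal frequency falls over time.

```python
# Illustrative sketch: a node's token validity period grows with its
# accumulated credit, so well-behaved nodes renew less and less often.
# The linear law and constants are placeholders, not the paper's formula.

def token_validity(credit, t0=10.0, k=5.0):
    """Validity period (e.g. in minutes) of the next token for a node
    that has accumulated `credit` successful renewals."""
    return t0 + k * credit

def simulate_renewals(horizon):
    """Count how many renewals a continuously well-behaved node performs
    within `horizon` time units."""
    t, credit, renewals = 0.0, 0, 0
    while t + token_validity(credit) <= horizon:
        t += token_validity(credit)   # token expires, node renews
        credit += 1                   # good behaviour accumulates credit
        renewals += 1
    return renewals
```

With these placeholder constants, a node that behaves well for 100 time units renews 5 times, whereas a fixed 10-unit validity would force 10 renewals: the per-node renewal load on the neighborhood shrinks as trust accumulates.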
We consider the problem of evaluating multiple overlapping queries defined on data streams, where each query is a conjunction of multiple filters and each filter may be shared across multiple queries. Efficient support for overlapping queries is a critical issue in emerging data stream systems, particularly when filters are expensive in terms of their computational complexity and processing time. This problem generalizes other well-known problems such as pipelined filter ordering and set cover, and is not only NP-hard but also hard to approximate within a factor of o(log n) of the optimum, where n is the number of queries. In this paper, we present two near-optimal approximation algorithms with provably good performance guarantees for the evaluation of overlapping queries. We present an edge-coverage-based Greedy algorithm that achieves an approximation ratio of (1 + log(n) + log(α)), where n is the number of queries and α is the average number of filters per query. We also present a randomized, fast, and easily parallelizable Harmonic algorithm that achieves an approximation ratio of 2β, where β is the maximum number of filters in a query. We have implemented these algorithms in a prototype system and evaluated their performance through extensive experiments in the context of multimedia stream analysis. The results show that our Greedy algorithm consistently outperforms other known algorithms under various settings and scales well as the numbers of queries and filters increase.
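The greedy flavor of shared-filter ordering can be sketched as follows. This toy is not the paper's edge-coverage Greedy algorithm: it builds a static evaluation order by repeatedly picking the filter with the largest expected payoff per unit cost, under the simplifying assumption that filter costs and failure probabilities are known, and that a failing filter resolves (to false) every query containing it.

```python
# Simplified sketch of greedy shared-filter ordering. Not the paper's
# edge-coverage algorithm: costs and failure probabilities are assumed
# known, and a failing filter resolves every query that contains it.

def greedy_order(queries, cost, p_fail):
    """queries: list of sets of filter ids; cost, p_fail: dicts keyed by
    filter id. Returns a filter evaluation order that resolves all
    queries under the assumption that each evaluated filter fails."""
    unresolved = [set(q) for q in queries]
    remaining = set().union(*unresolved)
    order = []
    while remaining and unresolved:
        def payoff(f):
            # Expected resolved queries per unit cost if f is evaluated.
            hits = sum(1 for q in unresolved if f in q)
            return p_fail[f] * hits / cost[f]
        best = max(remaining, key=payoff)
        order.append(best)
        remaining.discard(best)
        # Queries containing the (assumed-failing) filter are resolved.
        unresolved = [q for q in unresolved if best not in q]
    return order
```

Filters shared by many still-unresolved queries and likely to fail cheaply get evaluated first, which is the intuition behind amortizing expensive filters across overlapping queries.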