Abstract-Various RF-based location determination systems have been proposed that use received signal strength fingerprints to identify locations. We implemented a Bayesian method [8] for location determination in a WLAN testbed and achieved about 80% estimation accuracy with a precision of 2.5 meters. We propose two mechanisms to improve this accuracy: 1) Kalman filtering to remove noise in received signal strength readings, and 2) a technique that uses estimates from multiple observers to determine the location. Results from an IEEE 802.11b based implementation of the first method show that Kalman filtering during the training phase can increase this accuracy to 90%. The multiple observer technique, which uses received signal strength readings of the mobile device at the access point, also shows a similar increase in accuracy. Since the multiple observer technique requires more time and resources, we conclude that Kalman filtering is a simpler and more efficient way to increase the accuracy of location determination.
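The first mechanism above, smoothing noisy received signal strength (RSS) readings, can be illustrated with a one-dimensional Kalman filter that treats RSS as a roughly constant signal corrupted by Gaussian noise. This is a minimal sketch, not the paper's implementation: the `process_var` and `measurement_var` values and the simulated -60 dBm signal are illustrative assumptions.

```python
import random

def kalman_smooth(readings, process_var=1e-2, measurement_var=4.0):
    """One-dimensional Kalman filter over a sequence of noisy RSS readings.

    Assumes the true RSS is roughly constant between samples; process_var
    and measurement_var are assumed tuning values, not figures from the paper.
    """
    estimate = readings[0]
    error = 1.0  # initial variance of the estimate
    smoothed = []
    for z in readings:
        # predict: signal assumed nearly constant, so only uncertainty grows
        error += process_var
        # update: blend the prediction with the new noisy reading
        gain = error / (error + measurement_var)
        estimate += gain * (z - estimate)
        error *= 1.0 - gain
        smoothed.append(estimate)
    return smoothed

random.seed(0)
true_rss = -60.0  # hypothetical true signal strength in dBm
noisy = [true_rss + random.gauss(0.0, 2.0) for _ in range(50)]
smoothed = kalman_smooth(noisy)
print(round(smoothed[-1], 1))
```

The filtered sequence fluctuates far less than the raw readings, which is the effect the paper exploits during the training phase to build cleaner fingerprints.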
Over the last few years, the World Wide Web has transformed itself from a static content-distribution medium into an interactive, dynamic medium. The Web is now widely used as the presentation layer for a host of online services such as e-mail and address books, e-cards, e-calendars, shopping, banking, and stock trading. As a consequence, HTML (HyperText Markup Language) files are now typically generated dynamically after the server receives the request. From the Web-site providers' point of view, dynamic generation of HTML pages implies a poorer understanding of the real capacity and performance of their Web servers. From the Web developers' point of view, dynamic content implies an additional technology decision: which Web programming technology to employ in creating a Web-based service. Since the Web is inherently interactive, performance is a key requirement and often demands careful analysis of the systems involved. In this paper, we compare four dynamic Web programming technologies from the point of view of performance. The comparison is based on testing and measurement of two cases: one is a case study of a real application that was deployed in an actual Web-based service; the other is a trivial application. The two cases give us an opportunity to compare the performance of these technologies at the two ends of the complexity spectrum. Our focus in this paper is on how complex versus simple applications perform when implemented using different Web programming technologies. The paper draws comparisons and insights based on this development and performance measurement effort.
This chapter addresses the issue of determining the response time distribution in networks of queues. Four different techniques are described and demonstrated: a two-step numerical approach to compute the response time distribution for closed Markovian networks with general connectivity; a technique for determining the approximate (exact under certain conditions) response time distribution using continuous-time Markov chain (CTMC) "response time blocks"; an extension of "response time blocks" to open Markovian networks with general phase-type (PH) service time distributions; and an approach for handling non-Markovian networks containing M/G/1 priority and PH/G/1 queues. These techniques are shown to give accurate results with much smaller CTMCs or semi-Markov processes than exact analysis requires.
Abstract-The performance of an operating system's protocol stack implementation can greatly affect the performance of the networked applications that run on it. In this paper, we present a thorough measurement study and comparison of the network stack performance of two popular Linux kernels, 2.4 and 2.6, with a special focus on their performance on SMP architectures. Our findings reveal that interrupt processing costs, device driver overheads, checksumming, and buffer copying are the dominant overheads of protocol processing. We find that although raw CPU costs are not very different between the two kernels, Linux 2.6 shows vastly improved scalability, attributable to better scheduling and kernel locking mechanisms. We also uncover an anomalous behaviour in which Linux 2.6 performance degrades when packet processing for a single connection is distributed over multiple processors. This behaviour, however, confirms the superiority of the "processor per connection" model for parallel processing.
Sizing of IEEE 802.11 wireless LANs (WLANs), defined as the problem of finding the maximum number of users that can be supported, is essential for efficient application performance over WLANs. Using existing analytical models of the 802.11 MAC for sizing requires a mapping from application load and performance to link-layer load and performance, respectively, which we propose in this paper. We first evaluate analytical models of the 802.11 MAC from the sizing perspective and then propose an approximate sizing method. We illustrate our method with an HTTP application and validate it through extensive ns-2 simulations, which show that the number of users suggested by our tool is within 13% of that derived from simulations for a majority of the test cases.
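A back-of-the-envelope version of the application-to-link-layer mapping described here can be sketched as follows. All parameter values (effective MAC capacity, page size, request rate, and the protocol overhead factor) are illustrative assumptions, not figures from the paper, and the real method accounts for MAC contention effects that this sketch ignores.

```python
def max_users(mac_capacity_mbps, page_size_kb, pages_per_min, overhead=1.2):
    """Rough WLAN sizing: users supportable given per-user HTTP load.

    Maps application-level load (page size and request rate) to an
    approximate link-layer load; 'overhead' is an assumed inflation
    factor for TCP/IP and MAC headers.
    """
    # per-user offered load at the link layer, in Mb/s
    per_user_mbps = (page_size_kb * 8 / 1000) * (pages_per_min / 60) * overhead
    return int(mac_capacity_mbps // per_user_mbps)

# Illustrative numbers: ~6 Mb/s effective 802.11b capacity,
# 50 KB pages fetched at 10 pages/min per user.
print(max_users(6.0, 50, 10))
```

A real sizing tool must also check that per-user delay targets are met at that load, which is where the analytical 802.11 MAC models come in.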