It may not be feasible for sensor networks monitoring remote or inaccessible geographical regions to include powered sinks with Internet connections. We consider the scenario where sinks are absent from large-scale sensor networks, and unreliable sensors must collectively store sensed data over time on themselves. At a convenient time, such cached data can be collected from a small subset of live sensors by a centralized (possibly mobile) collector. In this paper, we propose decentralized algorithms that use fountain codes to guarantee the persistence and reliability of cached data on unreliable sensors. With fountain codes, the collector is able to recover all data as long as a sufficient number of sensors remain alive. We use random walks to disseminate data from a sensor to a random subset of sensors in the network, taking advantage of the low decoding complexity of fountain codes as well as the scalability of random-walk dissemination. We propose two algorithms based on random walks. Our theoretical analysis and simulation-based studies show that the first algorithm maintains the same level of fault tolerance as the original centralized fountain code, while introducing lower overhead in the dissemination process than a naive random-walk-based implementation. The second algorithm offers a lower level of fault tolerance than the original centralized fountain code, but incurs a much lower dissemination cost.
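The core idea above can be illustrated with a small sketch. This is not the paper's exact construction: we idealize the random-walk dissemination as each storage node caching the XOR of a random nonempty subset of the k source readings, and the collector decoding by Gaussian elimination over GF(2). The names `disseminate` and `collect` are our own.

```python
import random

def disseminate(readings, n_storage, rng):
    """Each storage node caches (mask, value): the XOR of a random
    nonempty subset of the k source readings. In the paper, random
    walks approximate this subset sampling in a decentralized way."""
    k = len(readings)
    stored = []
    for _ in range(n_storage):
        mask = 0
        while mask == 0:          # require a nonempty subset
            mask = rng.getrandbits(k)
        val = 0
        for i in range(k):
            if mask >> i & 1:
                val ^= readings[i]
        stored.append((mask, val))
    return stored

def collect(stored, k):
    """GF(2) Gaussian elimination over the (mask, value) pairs gathered
    from live nodes; returns the k readings if the equations have full
    rank, else None."""
    pivots = {}  # pivot bit -> (mask, value), kept in reduced form
    for mask, val in stored:
        for b in list(pivots):            # reduce by existing pivots
            if mask >> b & 1:
                pm, pv = pivots[b]
                mask ^= pm
                val ^= pv
        if mask:                          # new pivot at the leading bit
            b = mask.bit_length() - 1
            for pb, (pm, pv) in list(pivots.items()):
                if pm >> b & 1:           # clear bit b from older rows
                    pivots[pb] = (pm ^ mask, pv ^ val)
            pivots[b] = (mask, val)
    if len(pivots) < k:
        return None
    return [pivots[b][1] for b in range(k)]

# usage: 4 readings spread over 30 storage nodes, then recovered
rng = random.Random(7)
readings = [0x1A, 0x2B, 0x3C, 0x4D]
stored = disseminate(readings, 30, rng)
```

With 30 cached symbols for 4 readings, the collector recovers everything with overwhelming probability; querying too few live nodes (fewer than k) necessarily fails, which is the fault-tolerance trade-off the abstract analyzes.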
Opportunistic routing significantly increases unicast throughput in wireless mesh networks by effectively utilizing the wireless broadcast medium. With network coding, opportunistic routing can be implemented in a simple and practical way, without resorting to a complicated scheduling protocol. Traditionally, due to constraints on computational complexity, a protocol utilizing network coding needs to partition the data into multiple segments and encode only packets within the same segment. However, it is extremely challenging to decide the optimal time to move on to transmitting the next segment, and existing designs all resort to heuristics that may harm network throughput. To address this problem, we propose SlideOR, a new protocol that encodes source packets in overlapping sliding windows, such that coded packets from one window position may be useful towards decoding the source packets in another window position. Through extensive simulations, we show that SlideOR outperforms existing solutions and admits a much simpler implementation than solutions requiring complicated scheduling among multiple segments.
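A minimal sketch of the sliding-window idea, under simplifying assumptions: coding is XOR-based (GF(2)) rather than over a larger field, and the `Decoder` and `encode_window` names are ours, not SlideOR's. The point it demonstrates is that coded packets from overlapping window positions all feed one incremental elimination, so no segment boundary decision is needed.

```python
import random

class Decoder:
    """Incremental GF(2) elimination over (coefficient-mask, value) pairs."""
    def __init__(self, k):
        self.k = k
        self.pivots = {}  # pivot bit -> (mask, value), kept reduced

    def receive(self, mask, val):
        for b in list(self.pivots):           # reduce by known pivots
            if mask >> b & 1:
                pm, pv = self.pivots[b]
                mask ^= pm
                val ^= pv
        if mask:                               # innovative packet
            b = mask.bit_length() - 1
            for pb, (pm, pv) in list(self.pivots.items()):
                if pm >> b & 1:
                    self.pivots[pb] = (pm ^ mask, pv ^ val)
            self.pivots[b] = (mask, val)

    def decoded(self):
        # a source packet is decoded once its row involves only itself
        return {b: v for b, (m, v) in self.pivots.items() if m == 1 << b}

def encode_window(packets, start, w, rng):
    """XOR a random nonempty subset of packets[start:start+w]; the mask
    is expressed over all k packets so windows can overlap freely."""
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(w)
    val = 0
    for j in range(w):
        if mask >> j & 1:
            val ^= packets[start + j]
    return mask << start, val

# usage: 6 packets, window width 3, window slides one packet at a time
rng = random.Random(1)
packets = [11, 22, 33, 44, 55, 66]
dec = Decoder(len(packets))
for _ in range(200):                 # sweep until everything decodes
    if len(dec.decoded()) == len(packets):
        break
    for start in range(len(packets) - 2):
        dec.receive(*encode_window(packets, start, 3, rng))
```

Because every coded packet carries its coefficient mask over the full packet sequence, a packet coded at one window position reduces rows produced at another, which is the overlap benefit the abstract describes.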
Epidemic routing has been proposed to reduce the data transmission delay in opportunistic networks, in which data can be either replicated or network-coded along multiple opportunistic paths. In this paper, we introduce an analytical framework to study the performance of network-coding-based epidemic routing, in comparison with replication-based epidemic routing. With extensive simulations, we show that our model successfully characterizes these two protocols and demonstrates the superiority of network coding in opportunistic networks when bandwidth and node buffers are limited. We then propose a priority variant of the network-coding-based protocol, whose salient feature is that the destination can decode a high-priority subset of the data much earlier than it could decode any data without the priority scheme. Our analytical results provide insights into how network-coding-based epidemic routing with priority can reduce the data transmission delay while inducing low overhead.
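The kind of analytical model the abstract refers to can be sketched as a Markov contact process. The following is an illustrative Monte Carlo (Gillespie-style) simulation of the replication-based variant only, under the standard assumption that each node pair meets as an independent Poisson process; the coding variant differs precisely when bandwidth and buffers are constrained, which this sketch does not model.

```python
import random

def epidemic_delay(n, lam, rng):
    """One sample of the delivery delay of replication-based epidemic
    routing among n nodes (1 source, 1 destination, n-2 relays), where
    every node pair meets with Poisson rate lam. A meeting between an
    infected node and a susceptible relay copies the packet; a meeting
    between an infected node and the destination delivers it."""
    infected = 1                       # the source holds the packet
    relays = n - 2
    t = 0.0
    while True:
        infect_rate = lam * infected * (relays - (infected - 1))
        deliver_rate = lam * infected
        total = infect_rate + deliver_rate
        t += rng.expovariate(total)    # time to the next relevant meeting
        if rng.random() * total < deliver_rate:
            return t                   # destination was met first
        infected += 1                  # another relay got a copy
```

Averaging many samples reproduces the qualitative behaviour such frameworks predict: delivery delay shrinks as the node population grows, because the packet spreads along more opportunistic paths.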
Both peer-to-peer and sensor networks share the fundamental characteristics of node churn and failures: peers in P2P networks are highly dynamic, while sensors are not dependable. As such, maintaining the persistence of periodically measured data in a scalable fashion, without the use of centralized servers, has become a critical challenge in such systems. To better cope with node dynamics and failures, we propose priority random linear codes, together with their affiliated pre-distribution protocols, to maintain measurement data at different priorities, such that critical data have a higher opportunity to survive node failures than data of less importance. A salient feature of priority random linear codes is the ability to partially recover the more important subsets of the original data when it is not feasible to recover all of the data due to node dynamics. We present extensive analytical and experimental results to show the effectiveness of priority random linear codes.
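The partial-recovery property can be illustrated with a nested-subset sketch. This is a stand-in for the paper's actual priority random linear codes: we simplify to XOR (GF(2)) coding, where high-priority coded symbols mix only a prefix of the data and low-priority symbols mix over everything, so the prefix stays decodable from fewer surviving symbols. The function names are ours.

```python
import random

def priority_encode(data, width, rng):
    """One coded symbol mixing a random nonempty subset of data[:width].
    High-priority symbols use a small width (the critical prefix);
    low-priority symbols use the full data length."""
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(width)
    val = 0
    for i in range(width):
        if mask >> i & 1:
            val ^= data[i]
    return mask, val

def decode(symbols, k):
    """GF(2) elimination; returns {index: value} for every source
    symbol that is fully resolved, even if the rest are not."""
    pivots = {}
    for mask, val in symbols:
        for b in list(pivots):
            if mask >> b & 1:
                pm, pv = pivots[b]
                mask ^= pm
                val ^= pv
        if mask:
            b = mask.bit_length() - 1
            for pb, (pm, pv) in list(pivots.items()):
                if pm >> b & 1:
                    pivots[pb] = (pm ^ mask, pv ^ val)
            pivots[b] = (mask, val)
    return {b: v for b, (m, v) in pivots.items() if m == 1 << b}

# usage: 6 measurements, first 2 are critical; after failures, only 10
# prefix-coded symbols and 2 full-coded symbols survive
rng = random.Random(3)
data = [3, 1, 4, 1, 5, 9]
survivors = ([priority_encode(data, 2, rng) for _ in range(10)]
             + [priority_encode(data, 6, rng) for _ in range(2)])
recovered = decode(survivors, len(data))
```

Even though 12 surviving symbols cannot determine all 6 measurements here (rank is at most 4), the critical prefix is recovered, which is exactly the graceful degradation the abstract claims.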