2014
DOI: 10.1155/2014/974836

Fast Parallel Implementation for Random Network Coding on Embedded Sensor Nodes

Abstract: Network coding is becoming an essential part of network systems since it enhances system performance in various ways. To take full advantage of network coding, however, it is vital to guarantee low latency in the decoding process, and thus parallelization of random network coding has drawn broad attention from the network coding community. In this paper, we investigate the problem of parallelizing random network coding for embedded sensor systems with multicore processors. Recently, general purpose graphics proces…
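The abstract is truncated above. As a rough, hypothetical sketch of the kind of per-packet computation such work parallelizes (not code from the paper; the field polynomial 0x11B, the OpenMP pragma, and all names and sizes are assumptions), the following C snippet encodes one random-network-coded packet over GF(2^8) and splits the symbol-wise work across cores:

#include <stdint.h>
#include <stddef.h>

/* Sketch only: GF(2^8) multiplication with the polynomial x^8+x^4+x^3+x+1. */
static uint8_t gf256_mul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        uint8_t carry = a & 0x80;
        a <<= 1;
        if (carry) a ^= 0x1B;          /* reduce modulo the field polynomial */
        b >>= 1;
    }
    return p;
}

/* coded[i] = sum over j of coeff[j] * src[j][i] in GF(2^8). The loop over
 * payload symbols i has no cross-iteration dependencies, so it can be
 * distributed across the cores of a multicore sensor node. */
void rlnc_encode(uint8_t *coded, const uint8_t *const *src,
                 const uint8_t *coeff, size_t gen_size, size_t sym_len) {
    #pragma omp parallel for
    for (size_t i = 0; i < sym_len; i++) {
        uint8_t acc = 0;
        for (size_t j = 0; j < gen_size; j++)
            acc ^= gf256_mul(coeff[j], src[j][i]);
        coded[i] = acc;
    }
}

Decoding is the harder part to parallelize, since Gaussian elimination introduces dependencies between rows; the citing works below discuss strategies for that.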

Cited by 9 publications (9 citation statements)
References 19 publications
“…We note for completeness that a custom very large scale integration (VLSI) design for network coding has been studied in [51]. Computational strategies for GF(2^8) network coding on general-purpose multicore CPUs have mainly focused on judiciously partitioning the coefficient and data matrices to facilitate parallel processing [52]–[55]. These partitioning strategies have greatly sped up the network coding computations (and reduced the energy consumption [56], [57]).…”
Section: Related Work on Network Coding Computations
confidence: 99%
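As a hedged illustration of what such coefficient/data-matrix partitioning can look like (the slice layout, the row_op_t structure, and all names below are assumptions for this sketch, not details from [52]–[55]), each worker can replay the same coefficient-matrix row operations on its own contiguous slice of payload columns:

#include <stdint.h>
#include <stddef.h>

uint8_t gf256_mul(uint8_t a, uint8_t b);   /* as in the earlier sketch */

/* One row operation recorded while eliminating the small coefficient matrix:
 * data[dst][c] ^= factor * data[src][c] over GF(2^8), for every column c. */
typedef struct { size_t src, dst; uint8_t factor; } row_op_t;

void apply_row_op_slice(uint8_t **data, row_op_t op,
                        size_t first_col, size_t last_col) {
    for (size_t c = first_col; c < last_col; c++)
        data[op.dst][c] ^= gf256_mul(op.factor, data[op.src][c]);
}

/* Worker t of n handles the contiguous column slice [t*len/n, (t+1)*len/n). */
void slice_bounds(size_t sym_len, int t, int n, size_t *first, size_t *last) {
    *first = (size_t)t * sym_len / (size_t)n;
    *last  = (size_t)(t + 1) * sym_len / (size_t)n;
}

Because every worker touches disjoint columns, no synchronization is needed within a row operation; only the ordering of successive operations has to be preserved.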
“…Some additional speed-up can be achieved by scheduling the matrix block operations according to the dependency structure of the computations in a DAG [21]. A key drawback of the DAG approach in [21] is that it is limited to non-progressive decoding of a full generation of coded packets, whereas most of the partitioning approaches [52]–[55] are suitable for progressive RLNC decoding. The present study seeks to bring the benefits of DAG scheduling to progressive RLNC decoding.…”
Section: Related Work on Network Coding Computations
confidence: 99%
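For context, "progressive" decoding reduces each coded packet against the already-pivoted rows as soon as it arrives instead of waiting for a full generation. The sketch below is a hypothetical illustration of that per-packet step (names, sizes, the brute-force inverse, and the omission of the final back-substitution are all assumptions of this sketch, not code from [21] or the cited paper):

#include <stdint.h>
#include <string.h>

#define GEN  16     /* generation size (illustrative) */
#define SYMS 1024   /* payload symbols per packet (illustrative) */

uint8_t gf256_mul(uint8_t a, uint8_t b);        /* as in the earlier sketch */

static uint8_t gf256_inv(uint8_t a) {           /* brute force is enough for a sketch */
    for (int x = 1; x < 256; x++)
        if (gf256_mul(a, (uint8_t)x) == 1) return (uint8_t)x;
    return 0;
}

/* Reduce one arriving packet (coefficients c, payload d) against the rows
 * already stored; pivot[k] != 0 means row k holds a pivot in column k.
 * Returns 1 if the packet was innovative (rank increased), 0 otherwise.
 * Once all GEN pivots exist, back-substitution recovers the source packets. */
int rlnc_progressive_insert(uint8_t coef[GEN][GEN], uint8_t data[GEN][SYMS],
                            uint8_t pivot[GEN], uint8_t *c, uint8_t *d) {
    for (int k = 0; k < GEN; k++) {
        if (c[k] == 0) continue;
        if (!pivot[k]) {                         /* new pivot: normalize and store */
            uint8_t inv = gf256_inv(c[k]);
            for (int j = 0; j < GEN; j++)  c[j] = gf256_mul(c[j], inv);
            for (int i = 0; i < SYMS; i++) d[i] = gf256_mul(d[i], inv);
            memcpy(coef[k], c, GEN);
            memcpy(data[k], d, SYMS);
            pivot[k] = 1;
            return 1;
        }
        uint8_t f = c[k];                        /* eliminate against stored row k */
        for (int j = 0; j < GEN; j++)  c[j] ^= gf256_mul(f, coef[k][j]);
        for (int i = 0; i < SYMS; i++) d[i] ^= gf256_mul(f, data[k][i]);
    }
    return 0;                                    /* linearly dependent packet */
}

The payload update over d is again the expensive, column-parallel part, which is why the partitioning approaches [52]–[55] combine naturally with progressive decoding.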
“…A number of works have considered ways to improve the throughput and reduce the decoding latency of RNC [3]–[13]. RNC was proposed by Ho et al. [1] as a capacity-achieving scheme for multicasting in wired networks, following the work by Ahlswede et al. [2], who first suggested network coding and showed its utility in wireline networks.…”
Section: Related Work
confidence: 99%
“…Otherwise, users of the system cannot experience the benefit of RNC. To overcome the computational overhead of RNC, a number of studies have focused on the performance of RNC [3]–[13]; recent advances have succeeded in mitigating the concerns. According to one recent study [13], the throughput of an optimized RNC decoder on smartphone platforms is over 1 Gbps, which well exceeds the H.264 1080p30 FHD bandwidth requirement of 30 Mbps.…”
Section: Introduction
confidence: 99%
“…Associated applications were gathered, and their needs were merged into three so-called requirement profiles as part of the HiFlecs project [24]. Corresponding needs were gathered from all ZDKI projects and scientifically evaluated by an associated research study [25].…”
Section: Introduction
confidence: 99%