2021 IEEE 29th International Conference on Network Protocols (ICNP)
DOI: 10.1109/icnp52444.2021.9651946
NetFC: Enabling Accurate Floating-point Arithmetic on Programmable Switches

Cited by 15 publications (6 citation statements) | References 20 publications
“…The worker deems that the gradient packet has been lost, so it retransmits the packet. Without a local recording in the switch, the switch will wrongly aggregate the gradients twice.…” (Footnote 3 of the citing paper: “While this on-chip memory demand is acceptable in practice, we can also leverage the prefix-based compression proposed in [15] to further reduce the memory consumption.”)
Section: System Reliability
confidence: 99%
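This excerpt describes a deduplication problem in in-network gradient aggregation: a retransmitted packet must not be aggregated twice. A minimal sketch of the local-recording idea, assuming a per-slot bitmap of worker IDs; the names, sizes, and integer gradient domain are illustrative, not the citing paper's actual design:

```python
# Sketch of "local recording": the switch keeps a per-slot bitmap of
# which workers' gradients it has already aggregated, so a
# retransmitted gradient packet is ignored instead of added twice.
NUM_WORKERS = 4  # illustrative

class AggSlot:
    def __init__(self):
        self.seen = 0  # bitmap: bit i is set once worker i is counted
        self.acc = 0   # running gradient sum (integer domain)

    def aggregate(self, worker_id: int, grad: int) -> None:
        mask = 1 << worker_id
        if self.seen & mask:
            return          # duplicate (retransmission): drop it
        self.seen |= mask
        self.acc += grad

slot = AggSlot()
slot.aggregate(0, 10)
slot.aggregate(0, 10)  # retransmitted packet, not re-aggregated
slot.aggregate(1, 5)
print(slot.acc)        # 15, not 25
```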
“…Rather than using the aforementioned floating-to-integer method, which may introduce accuracy loss, we propose a table-lookup method that enables on-the-fly floating-point summation for 32-bit floats. Our method is inspired by [15], which implements floating-point operations for 16-bit floats.…”
confidence: 99%
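The table-lookup idea this excerpt borrows from [15] (NetFC) can be illustrated with a logarithmic number system (LNS): once values are stored as fixed-point base-2 logarithms, addition reduces to a compare, a subtract, and a single table lookup, all of which fit a switch pipeline. A minimal sketch, where the fractional precision, table size, and cutoff are illustrative assumptions rather than NetFC's actual parameters:

```python
import math

FRAC_BITS = 8               # fixed-point fractional bits (illustrative)
SCALE = 1 << FRAC_BITS
MAX_DIFF = 16 << FRAC_BITS  # beyond this, the smaller term is negligible

# Precompute log2(1 + 2^(-d/SCALE)) for quantized differences d >= 0.
ADD_TABLE = [round(math.log2(1.0 + 2.0 ** (-d / SCALE)) * SCALE)
             for d in range(MAX_DIFF + 1)]

def to_lns(x: float) -> int:
    """Encode a positive float as a fixed-point base-2 logarithm."""
    return round(math.log2(x) * SCALE)

def from_lns(l: int) -> float:
    return 2.0 ** (l / SCALE)

def lns_add(a: int, b: int) -> int:
    """a (+) b using only compare, subtract, add, and a table lookup."""
    hi, lo = (a, b) if a >= b else (b, a)
    d = hi - lo
    if d > MAX_DIFF:
        return hi             # smaller operand vanishes
    return hi + ADD_TABLE[d]  # hi + log2(1 + 2^(-d))

# Example: 3.5 + 1.25 computed entirely in the log domain.
print(from_lns(lns_add(to_lns(3.5), to_lns(1.25))))  # ~4.75
```

On a real switch the table would live in match-action stages, and zero or negative values would need extra encoding; the sketch only shows why no floating-point ALU is required.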
“…Most programmable data plane architectures [63] cannot support complex mathematical operations such as multiplication and division. A common solution is to adopt approximation techniques [64] based on bit-shifting with adders and table lookups. In addition, more complex operations such as exponentials and logarithms can also be realized [45] using approximation techniques based on binomial series expansion.…”
Section: B. P4
confidence: 99%
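As an illustration of the bit-shifting-with-adders approximation mentioned in this excerpt, multiplication can be approximated by keeping only the most significant set bits of one operand, turning x * y into a few shifts and adds. A minimal sketch; keeping the top two bits is an illustrative assumption, not a prescription from the cited works:

```python
def approx_mul(x: int, y: int, k: int = 2) -> int:
    """Approximate x * y by keeping the k most significant set bits
    of y, so the product becomes k shift-and-add operations."""
    result, kept = 0, 0
    for bit in range(y.bit_length() - 1, -1, -1):
        if (y >> bit) & 1:
            result += x << bit  # one shift + one add per kept bit
            kept += 1
            if kept == k:
                break
    return result

print(approx_mul(100, 37))  # 37 ~ 32 + 4 -> 3600 (exact: 3700)
```

Each kept bit costs one shift and one add, operations that map directly onto switch ALUs; keeping more bits trades pipeline stages for accuracy.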
“…Recently, new methods for computing the elementary operations log and exp have been proposed in [30]. We can also mention [31], which implements floating-point arithmetic in-network with 99.94% accuracy in the worst case. N2Net [32] and BaNANA Split [33] have shown implementations of binary neural networks on the data plane.…”
Section: B. In-network Real Value Computation
confidence: 99%