2021
DOI: 10.1109/tsp.2021.3086355
Towards Lower Precision Adaptive Filters: Facts From Backward Error Analysis of RLS

Cited by 6 publications (4 citation statements)
References 37 publications
“…The accuracy of TinyOFL is often limited due to its restricted model capacity. However, the accuracy is not affected by weight quantization if the quantization noise is relatively lower than the data noise [21]. The feature map using quantized weights, f̂_i(x_i), on the input feature x_i at the i-th layer can be deviated by δy_i due to the weight quantization as follows:…”
Section: B. Weight Quantization Effects on Accuracy
Confidence: 99%
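The claim quoted above can be illustrated with a small NumPy sketch: quantize a linear layer's weights, measure the output deviation δy_i, and compare it against the deviation caused by additive data noise. The 8-bit uniform quantizer, layer shapes, and noise level below are hypothetical choices for illustration, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits=8):
    # Hypothetical uniform symmetric quantizer: snap weights to a
    # grid of 2**(bits-1) - 1 levels per sign.
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

# A single linear feature map f_i(x) = W @ x with assumed shapes.
W = rng.standard_normal((64, 128))
x = rng.standard_normal(128)

y_exact = W @ x            # f_i(x_i): non-quantized feature map
y_quant = quantize(W) @ x  # fhat_i(x_i): quantized feature map

# ||delta y_i||: output deviation due to weight quantization alone.
delta_y = np.linalg.norm(y_quant - y_exact)

# Output deviation due to modest additive noise on the input data.
noise = 0.05 * rng.standard_normal(128)
delta_data = np.linalg.norm(W @ (x + noise) - y_exact)

print(delta_y, delta_data)
```

With these (assumed) settings the quantization-induced deviation comes out well below the data-noise-induced one, which is the regime in which the excerpt says accuracy is unaffected.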
“…where f_i(x_i) is the feature map operation on x_i using non-quantized weights. The f̂_i(x_i) generates the quantity δq_i based on the backward error analysis [21], [22]:…”
Section: B. Weight Quantization Effects on Accuracy
Confidence: 99%
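The backward-error view quoted here can be sketched in the same spirit: the quantized layer output equals the exact map applied to exactly perturbed weights, so the deviation δq_i is bounded by the size of the weight perturbation times the input norm. The quantizer and shapes are again assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(w, bits=8):
    # Same hypothetical uniform symmetric quantizer as above.
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

W = rng.standard_normal((64, 128))
x = rng.standard_normal(128)

W_hat = quantize(W)
delta_W = W_hat - W  # backward error: fhat_i(x) = (W + delta_W) @ x exactly

# Forward deviation delta_q, and its backward-error bound
# ||delta_q|| <= ||delta_W||_2 * ||x||.
delta_q = np.linalg.norm(W_hat @ x - W @ x)
bound = np.linalg.norm(delta_W, 2) * np.linalg.norm(x)

print(delta_q, bound)
```

The point of the backward formulation is that the quantized computation is the *exact* computation on slightly perturbed weights, so standard perturbation bounds on the layer apply directly.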