ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp43922.2022.9747328

Dynamic Binary Neural Network by Learning Channel-Wise Thresholds

Abstract: Binary neural networks (BNNs) constrain weights and activations to +1 or -1, yielding low storage and computational cost, which makes them hardware-friendly for portable devices. Recently, BNNs have achieved remarkable progress and have been adopted in various fields. However, the performance of BNNs is sensitive to the activation distribution. Existing BNNs use the Sign function with predefined or learned static thresholds to binarize activations. This process limits the representation capacity of BNNs, since different s…
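Based only on the abstract, activation binarization with a learnable channel-wise threshold can be sketched as follows. This is a minimal illustration, not the paper's reference implementation: the module name, zero initialization, and the straight-through backward pass are all assumptions.

```python
import torch
import torch.nn as nn

class ChannelWiseSign(nn.Module):
    """Hypothetical sketch: binarize activations to +/-1 with one
    learnable threshold per channel, as the abstract describes."""

    def __init__(self, num_channels: int):
        super().__init__()
        # Assumption: one learnable threshold per channel, initialized to zero.
        self.threshold = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shifted = x - self.threshold  # shift each channel by its own threshold
        binary = torch.where(shifted >= 0,
                             torch.ones_like(shifted),
                             -torch.ones_like(shifted))
        # Straight-through estimator: the forward pass returns the binary
        # values, while the backward pass treats the sign as the identity
        # on `shifted`, a common trick when training BNNs.
        return shifted + (binary - shifted).detach()
```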

Cited by 14 publications (8 citation statements)
References 15 publications
“…To boost feature expression capabilities, J. Zhang et al. [90] replaced the static RSign and RPReLU in ReActNet [87] with Dynamic Sign (DySign) and Dynamic PReLU (DyPReLU). Also, in [91], the authors proposed the Binarized Ghost Module (BGM) as a modification of ReActNet [87] to improve the information in the feature maps.…”
Section: C: Gradient Error Minimization
Mentioning confidence: 99%
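The dynamic variants mentioned in this excerpt make the threshold input-dependent rather than static. A minimal sketch of that idea, assuming a squeeze-and-excitation-style branch predicts per-sample, per-channel thresholds (the pooling choice and layer sizes are our assumptions, not the design from [90]):

```python
import torch
import torch.nn as nn

class DySign(nn.Module):
    """Sketch of an input-dependent sign function in the spirit of
    DySign: a lightweight branch predicts one binarization threshold
    per channel from the input itself. The branch architecture here
    is an assumption for illustration."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Predict per-sample, per-channel thresholds from pooled statistics.
        thr = self.fc(self.pool(x).flatten(1)).view(b, c, 1, 1)
        shifted = x - thr
        binary = torch.where(shifted >= 0,
                             torch.ones_like(shifted),
                             -torch.ones_like(shifted))
        # Straight-through estimator, as in the static sketch above.
        return shifted + (binary - shifted).detach()
```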
“…Further innovation can be made to meet such an inspiring target, for example, fusing PDC into binary neural networks [15,32,51]. PDC enjoys a general form for organizing the encoding of pixel differences, making it possible to incorporate any LBP variant into the convolutional module.…”
Section: Future Work and Conclusion
Mentioning confidence: 99%
“…Similarly, Bi-CPDC, Bi-APDC, and Bi-RPDC can be derived using different probing strategies. Generally, previous BCNNs [37,90,108] adopted a shared threshold τ over the whole image during binarization without considering local content variations, leading to the irreversible loss of high-order image details. As illustrated in Fig.…”
Section: Formulations of Bi-PDC
Mentioning confidence: 99%
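For clarity, the contrast this excerpt draws can be written out explicitly (our notation, not the cited papers'): a shared threshold applies one τ to every position of the feature map, whereas the channel-wise scheme of the present paper learns a separate threshold τ_c for each channel c.

$$x_b = \operatorname{sign}(x - \tau) \quad \text{(shared } \tau \text{ for the whole image)}, \qquad x_b^{(c)} = \operatorname{sign}\big(x^{(c)} - \tau_c\big) \quad \text{(one } \tau_c \text{ per channel)}.$$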