2019
DOI: 10.1007/978-3-030-30048-7_24
Training Binarized Neural Networks Using MIP and CP

Cited by 16 publications (34 citation statements)
References 13 publications
“…In Section 4.2, we experimentally demonstrated that modeling ð instead of significantly improves the test performance of our MIP model. Our results suggest that similar works on training Binarized Neural Networks [11] using MIP models [25] might also benefit from similar explicit modeling of function ð.…”
Section: Interpretability Results
Mentioning confidence: 66%
“…Whereas a neural network is not typically trained through constrained optimization, we believe that our approach is more easily understood under such a mindset, which aligns with further work emerging from this community [8,31,15].…”
Section: Introduction
Mentioning confidence: 60%
“…Other researchers have applied information-theoretic ideas to determine the optimal neural network size by trading off complexity against training error using second-derivative information, which includes removing unimportant weights [ 184 ]. Researchers in [ 185 ] proposed a new method to train binarized neural networks at run-time; during forward propagation, this method greatly reduces the required memory size and replaces most arithmetic operations with bit-wise operations [ 186 ]. Binary weights were also proposed in [ 187 ], where researchers sought to replace the multiply-accumulate operations with simple accumulations, because multipliers take up most of the area and are considered power-hungry components when a digital neural network is implemented in hardware.…”
Section: Deep Learning Solutions For IoT Data Compression
Mentioning confidence: 99%
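The last statement notes that binarization lets most arithmetic be replaced by bit-wise operations. A minimal sketch (not taken from the cited papers) of why this works: when weights and activations are restricted to {-1, +1} and packed as bits (0 ↦ -1, 1 ↦ +1), a dot product of length n reduces to an XNOR followed by a popcount, since the product equals 2·popcount(XNOR(a, w)) − n.

```python
def pack(v):
    """Pack a {-1, +1} vector into an integer, LSB-first (bit i set iff v[i] == +1)."""
    return sum((1 << i) for i, x in enumerate(v) if x == +1)

def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two {-1, +1} vectors of length n, packed as integers."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ w_bits) & mask      # bit set where the two signs agree
    matches = bin(xnor).count("1")        # popcount: number of agreeing positions
    return 2 * matches - n                # agreements contribute +1, disagreements -1

def plain_dot(a, w):
    """Reference dot product on the unpacked {-1, +1} vectors."""
    return sum(x * y for x, y in zip(a, w))

a = [+1, -1, +1, +1]
w = [+1, +1, -1, +1]
assert binary_dot(pack(a), pack(w), len(a)) == plain_dot(a, w)
```

On real hardware the popcount is a single instruction over a machine word, which is the source of the memory and compute savings the citing paper describes.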