2017 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2017.7966076
Deep learning approach to link weight prediction

Cited by 7 publications (3 citation statements) · References 14 publications
“…In this section, we further explore the performance of our method on the well-known model (SVHN) used in LAQ (Hou and Kwok 2018). Table 5 shows the test accuracy on SVHN.…”

Section: Experimental Results on SVHN
Confidence: 99%
“…To overcome the above limitation, Hou et al. (2017) propose a loss-aware quantization approach that directly optimizes the binarized weights to minimize the final loss. Follow-up work (Hou and Kwok 2018) extends this binarization scheme to m-bit quantization, but Peng et al. (2021) found that, under certain conditions, these approaches may fail to converge due to quantization error. Zhou et al. (2018) proposed an explicit loss-aware weight quantization method that integrates information from the loss function with respect to the full-precision weights into the quantization.…”

Section: Related Work
Confidence: 99%
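The loss-aware binarization described in the statement above has a simple closed form when the Hessian of the loss is approximated by a diagonal matrix D (Hou et al. 2017 reuse an Adam-style second-moment estimate for this). The sketch below is illustrative only: the function name and toy inputs are invented here, and it shows the one-layer closed-form solution, not the authors' full training loop.

```python
import numpy as np

def loss_aware_binarize(w, d):
    """Binarize weights w as alpha * b, b in {-1, +1}^n, choosing alpha and b
    to minimize the second-order loss proxy sum_i d_i * (alpha*b_i - w_i)^2,
    where d is a diagonal Hessian estimate (all entries assumed positive).

    For fixed alpha > 0 the optimum is b_i = sign(w_i); substituting back
    gives the weighted scale alpha = sum(d * |w|) / sum(d).
    """
    b = np.sign(w)
    b[b == 0] = 1.0  # break ties at zero arbitrarily
    alpha = np.dot(d, np.abs(w)) / np.sum(d)
    return alpha * b

# Toy example (values are made up for illustration):
w = np.array([0.8, -0.3, 0.05, -1.2])   # full-precision weights
d = np.array([1.0, 2.0, 0.5, 1.5])      # diagonal curvature estimates
q = loss_aware_binarize(w, d)           # alpha = 3.225 / 5.0 = 0.645
```

The weighting by d is what makes the scheme "loss-aware": weights where the loss surface is estimated to be more curved contribute more to the chosen scale, unlike plain sign-binarization with an unweighted mean of |w|.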