Computational Cost Reduction in Learned Transform Classifications
2015 · Preprint
DOI: 10.48550/arxiv.1504.06779

Cited by 1 publication (1 citation statement) · References 0 publications
“…The approach still requires a real-valued, full-precision training phase, however, so the benefits of reducing computations do not apply to training. Similarly, Machado et al. (2015) manage to get acceptable accuracy on sparse representation classification by replacing all floating-point multiplications with integer shifts. Bit-stream networks (Burge et al., 1999) also provide a way of binarizing neural network connections by substituting weight connections with logical gates.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
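The cited statement turns on one idea: if each weight is quantized to a signed power of two, multiplying by it reduces to a bit shift plus a sign flip, so no floating-point multiplication is needed at inference time. The sketch below illustrates that idea under a simple nearest-power-of-two quantization; the function names and weight values are hypothetical and are not taken from Machado et al. (2015).

```python
import math

def to_shift(w: float) -> tuple[int, int]:
    """Quantize weight w to (sign, exponent) so that w ~= sign * 2**exponent.
    Illustrative assumption: round to the nearest power of two in log space."""
    if w == 0.0:
        return 0, 0
    return (1 if w > 0 else -1), round(math.log2(abs(w)))

def shift_mul(x: int, sign: int, exponent: int) -> int:
    """Multiply integer x by the quantized weight using only a shift and a sign flip."""
    if sign == 0:
        return 0
    y = x << exponent if exponent >= 0 else x >> -exponent
    return sign * y

# Hypothetical example: an integer dot product with shift-quantized weights.
weights = [0.52, -0.24, 1.9]                # illustrative learned weights
qweights = [to_shift(w) for w in weights]   # -> [(1, -1), (-1, -2), (1, 1)]
x = [8, 4, 2]
acc = sum(shift_mul(xi, s, e) for xi, (s, e) in zip(x, qweights))
print(acc)  # (8 >> 1) - (4 >> 2) + (2 << 1) = 4 - 1 + 4 = 7
```

Quantizing to powers of two costs some precision per weight, which is why, as the statement notes, a full-precision training phase is still required before the multiplication-free representation can be used.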