2018
DOI: 10.48550/arxiv.1807.08583
Preprint
Hardware Optimizations of Dense Binary Hyperdimensional Computing: Rematerialization of Hypervectors, Binarized Bundling, and Combinational Associative Memory

Cited by 2 publications (2 citation statements)
References 0 publications
“…Its relatively small size makes the timing simulation computationally tractable. We also selected another machine learning algorithm based on computing with hyperdimensional (HD) [49] vectors to detect two face/non-face classes among 10,000 web faces of a face detection dataset (FACE) from Caltech [50]. Fig.…”
Section: Results
confidence: 99%
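The HD classifier mentioned in the statement above can be sketched with dense binary hypervectors: random vectors with approximately equal numbers of 1s and 0s are nearly orthogonal, class examples are combined by binarized (majority-vote) bundling, and an associative memory returns the class whose stored hypervector is nearest in Hamming distance. A minimal sketch; the two-class "face"/"non-face" setup, the noise level, and the tie-break rule are illustrative assumptions, not details from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def random_hv():
    # Dense binary hypervector: ~equal numbers of randomly placed 1s and 0s,
    # so any two independent draws are approximately orthogonal
    # (Hamming distance ~ D/2).
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bundle(hvs):
    # Binarized bundling: componentwise majority vote over the stacked
    # hypervectors (ties broken toward 1 here; the tie-break is a modeling
    # choice, not taken from the paper).
    return (np.sum(hvs, axis=0) * 2 >= len(hvs)).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def noisy(hv, flips=1000):
    # Flip a random 10% of the bits to simulate a noisy example.
    out = hv.copy()
    idx = rng.choice(D, size=flips, replace=False)
    out[idx] ^= 1
    return out

# Two hypothetical classes, each summarized by bundling a few noisy
# training hypervectors around a class prototype.
proto = {c: random_hv() for c in ("face", "non-face")}
memory = {c: bundle([noisy(p) for _ in range(5)]) for c, p in proto.items()}

def classify(query):
    # Associative-memory lookup: nearest stored class in Hamming distance.
    return min(memory, key=lambda c: hamming(memory[c], query))
```

Even with 10% of the query bits flipped, the distance to the correct class (~1000 bits) stays far below the ~D/2 distance to the other class, so the lookup is robust.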
“…In the dense binary coding [16], a letter hypervector has an approximately equal number of randomly placed 1s and 0s, hence the 27 hypervectors are approximately orthogonal to each other. As another alternative, mapping to binary hypervectors can be realized by rematerialization [38], e.g., by using a cellular automaton exhibiting chaotic behaviour [39].…”
Section: Mapping and Encoding Module
confidence: 99%
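The rematerialization idea in the statement above can be sketched with an elementary cellular automaton: instead of storing every hypervector, only a small seed is kept, and each hypervector is regenerated on demand by iterating the CA. Rule 90, where each cell becomes the XOR of its two neighbours, is one concrete chaotic rule; choosing it here is an assumption beyond the quoted text, which only specifies chaotic behaviour:

```python
import numpy as np

def ca_rule90_step(state):
    # Elementary cellular automaton rule 90 with periodic boundaries:
    # each cell becomes the XOR of its left and right neighbours.
    return np.roll(state, 1) ^ np.roll(state, -1)

def rematerialize(seed, index, width=10_000):
    # Rematerialization: regenerate hypervector `index` on demand from a
    # stored seed by iterating the CA `index` times, instead of keeping
    # the full codebook in memory.
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, size=width, dtype=np.uint8)
    for _ in range(index):
        state = ca_rule90_step(state)
    return state
```

Successive CA rows from a random initial state are approximately orthogonal (pairwise Hamming distance near width/2), which is the property dense binary mapping needs, while the storage cost shrinks to one seed row.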