2020
DOI: 10.3389/frobt.2020.00063
Symbolic Representation and Learning With Hyperdimensional Computing

Abstract: It has been proposed that machine learning techniques can benefit from symbolic representations and reasoning systems. We describe a method in which the two can be combined in a natural and direct way by use of hyperdimensional vectors and hyperdimensional computing. By using hashing neural networks to produce binary vector representations of images, we show how hyperdimensional vectors can be constructed such that vector-symbolic inference arises naturally out of their output. We design the Hyperdimensional I…
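The abstract's framing of vector-symbolic inference over binary hypervectors can be illustrated with the standard binding/unbinding operations of hyperdimensional computing. The sketch below is illustrative only (the variable names and dimensionality are assumptions, not taken from the paper): with dense binary HVs, binding is elementwise XOR, which is its own inverse, so binding a role-filler pair with the role recovers the filler.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # HVs are typically on the order of 10,000 dimensions

# Random dense binary HVs standing in for a symbolic role and a filler.
role = rng.integers(0, 2, D, dtype=np.uint8)
filler = rng.integers(0, 2, D, dtype=np.uint8)

# Binding via elementwise XOR: the bound pair is quasi-orthogonal to both inputs.
bound = role ^ filler

# Unbinding: XOR is self-inverse, so XOR-ing with the role recovers the filler.
recovered = bound ^ role

def sim(a, b):
    """Hamming similarity in [0, 1]; ~0.5 for unrelated random binary HVs."""
    return 1.0 - np.count_nonzero(a ^ b) / len(a)
```

For random HVs of this dimensionality, `sim(bound, filler)` concentrates near 0.5 (unrelated), while `sim(recovered, filler)` is exactly 1.0, which is the property symbolic inference over HVs relies on.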

Cited by 23 publications (27 citation statements)
References 21 publications (18 reference statements)
“…When compared to the standard aggregation methods in (mobile robotics) place recognition experiments, HVs of the aggregated descriptors demonstrated average performance better than the alternative methods (except the exhaustive pair-wise comparison). A very similar concept was demonstrated in [Mitrokhin et al., 2020] using an image classification task; see also Table 15. One of the proposed ways of forming an image HV used the superposition of three binary HVs obtained from three different hashing neural networks.…”
Section: Similarity Estimation of Images (supporting)
confidence: 55%
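The superposition of binary HVs mentioned in the statement above is conventionally computed as an elementwise majority vote, which keeps the bundled vector binary. A minimal sketch, with random stand-ins for the outputs of the three hashing networks (the real method would use actual network outputs; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000

# Hypothetical stand-ins for binary HVs produced by three different
# hashing neural networks for the same image.
hv1, hv2, hv3 = (rng.integers(0, 2, D, dtype=np.uint8) for _ in range(3))

# Superposition of binary HVs: elementwise majority vote over the three
# vectors, so the result is itself a binary HV.
image_hv = ((hv1.astype(int) + hv2 + hv3) >= 2).astype(np.uint8)

def sim(a, b):
    """Hamming similarity in [0, 1]."""
    return 1.0 - np.count_nonzero(a ^ b) / len(a)
```

The useful property is that the bundle remains similar to each component: for independent random inputs, a given output bit agrees with `hv1` unless both other vectors disagree, giving an expected Hamming similarity of about 0.75 to each input.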
“…For example, as mentioned in Section 3.4.3 in [Kleyko et al., 2021c], it is very common to use activations of convolutional neural networks to form HVs of images. This is commonly done using standard pre-trained neural networks [Yilmaz, 2015b], [Mitrokhin et al., 2020]. Two challenges here are to increase the dimensionality and to change the format of the neural network representations to conform with the HV format requirements.…”
Section: The Use of Neural Networks for Producing HVs (mentioning)
confidence: 99%
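One common way to address the two challenges named in the statement above (raising the dimensionality and binarizing) is a random projection followed by thresholding at zero, i.e., a SimHash-style encoding. The sketch below is an assumption-laden illustration, not the procedure of any cited paper; the feature size and dimensionality are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000  # target HV dimensionality
F = 512     # hypothetical CNN feature size (e.g., a pooled activation vector)

# Stand-in activation vector; in practice this would come from a
# pre-trained convolutional network.
features = rng.standard_normal(F)

# Random projection lifts the F-dimensional features to D dimensions;
# thresholding at zero binarizes the result into HV format.
projection = rng.standard_normal((D, F))
image_hv = (projection @ features > 0).astype(np.uint8)

assert image_hv.shape == (D,)
```

Because sign-of-random-projection approximately preserves angles between inputs, nearby activation vectors map to binary HVs with small Hamming distance, which is what makes the encoding useful for similarity-based HV operations.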
“…Besides the few-shot classification task that we highlighted in this work, there are several tantalizing prospects for the HD learned patterns in the key memory. They form vector-symbolic representations that can be used directly for reasoning, or for multimodal fusion across separate networks [38]. The key-value memory also becomes the central ingredient in many recent models for unsupervised and contrastive learning [39]-[41], where a huge number of prototype vectors should be efficiently stored, compared, compressed, and retrieved.…”
Section: Results (mentioning)
confidence: 99%
“…The human brain remains the most sophisticated processing component that has ever existed. The ever-growing research in biological vision, cognitive psychology, and neuroscience has given rise to many concepts that have led to prolific advancement in artificial intelligence accomplishing cognitive tasks [41]-[43].…”
Section: Brain-Inspired Computing Models (mentioning)
confidence: 99%