Abstract: It has been proposed that machine learning techniques can benefit from symbolic representations and reasoning systems. We describe a method in which the two can be combined in a natural and direct way by use of hyperdimensional vectors and hyperdimensional computing. By using hashing neural networks to produce binary vector representations of images, we show how hyperdimensional vectors can be constructed such that vector-symbolic inference arises naturally out of their output. We design the Hyperdimensional I…
“…When compared to the standard aggregation methods in (mobile robotics) place recognition experiments, HVs of the aggregated descriptors demonstrated better average performance than the alternative methods (except the exhaustive pair-wise comparison). A very similar concept was demonstrated in [Mitrokhin et al, 2020] using an image classification task; see also Table 15. One of the proposed ways of forming an image HV used the superposition of three binary HVs obtained from three different hashing neural networks.…”
Section: Similarity Estimation Of Images (supporting)
confidence: 55%
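As an illustration of that superposition step, here is a minimal sketch (in Python/NumPy) of bundling three binary HVs by component-wise majority voting, a standard aggregation operation for binary HVs in HDC/VSA. The hashing networks are stubbed out with random binary outputs, and the function names and the 10,000-dimensional setting are illustrative assumptions rather than details from the cited papers.

    import numpy as np

    D = 10_000  # HV dimensionality (illustrative choice)

    def hashing_net_output(image, seed):
        # Stand-in for a binary hashing neural network: in the cited
        # setup, each of three networks maps an image to a binary code.
        rng = np.random.default_rng(seed)
        return rng.integers(0, 2, size=D, dtype=np.int8)

    def bundle_majority(hvs):
        # Component-wise majority vote: the standard superposition
        # (bundling) operation for binary HVs; an odd number of inputs
        # avoids ties.
        return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.int8)

    image = None  # placeholder input consumed only by the stubs
    image_hv = bundle_majority([hashing_net_output(image, s) for s in range(3)])
    print(image_hv.shape)  # (10000,)

Because majority voting preserves similarity, images whose three hash codes largely agree end up with similar HVs, which is what makes the aggregated representation usable for similarity estimation.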
“…For example, as mentioned in Section 3.4.3 in [Kleyko et al, 2021c], it is very common to use activations of convolutional neural networks to form HVs of images. This is commonly done using standard pre-trained neural networks [Yilmaz, 2015b], [Mitrokhin et al, 2020]. Two challenges here are to increase the dimensionality and change the format of the neural network representations to conform with the HV format requirements.…”
Section: The Use Of Neural Networks For Producing HVs (mentioning)
confidence: 99%
“…Two challenges here are to increase the dimensionality and change the format of the neural network representations to conform with the HV format requirements. Some neural networks already produce binary vectors (see [Mitrokhin et al, 2020]), and the transformation to HVs consisted of randomly repeating those components to reach the necessary dimensionality. In [Karunaratne et al, 2021b], the authors first guided a convolutional neural network to produce HDC/VSA-conforming vectors with the aid of proper attention and sharpening functions, and then simply applied the sign function to transform those real-valued vectors to bipolar HVs (of the same dimensionality).…”
Section: The Use Of Neural Networks For Producing HVs (mentioning)
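Both conversions mentioned above are easy to sketch. Below, a short binary code is expanded to HV dimensionality by repeating randomly chosen components (one plausible reading of the random repetition in [Mitrokhin et al, 2020]; the exact scheme there may differ), and a real-valued embedding is mapped to a bipolar HV with the sign function, as described for [Karunaratne et al, 2021b]. All names and sizes are illustrative assumptions.

    import numpy as np

    D = 10_000      # target HV dimensionality (illustrative)
    CODE_LEN = 256  # length of the short binary code (illustrative)
    rng = np.random.default_rng(0)

    # Fixed random repetition pattern, shared across all inputs so that
    # similar codes map to similar HVs.
    IDX = rng.integers(0, CODE_LEN, size=D)

    def expand_binary_code(code):
        # Increase dimensionality by randomly repeating components of
        # the short binary code according to the fixed pattern.
        return code[IDX]

    def to_bipolar(embedding):
        # Sign binarization: map a real-valued vector to {-1, +1};
        # zeros are pushed to +1.
        return np.where(embedding >= 0, 1, -1).astype(np.int8)

    short_code = rng.integers(0, 2, size=CODE_LEN, dtype=np.int8)
    print(expand_binary_code(short_code).shape)  # (10000,)
    print(to_bipolar(rng.standard_normal(D))[:5])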
This is Part II of the two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and distributed vector representations. Holographic Reduced Representations [Plate, 1995], [Plate, 2003] is an influential HDC/VSA model that is well known in the machine learning domain and often used to refer to the whole family. However, for the sake of consistency, we use HDC/VSA to refer to the area. Part I of this survey [Kleyko et al., 2021c] covered foundational aspects of the area, such as the historical context leading to the development of HDC/VSA, the key elements of any HDC/VSA model, known HDC/VSA models, and the transformation of input data of various types into high-dimensional vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive computing and architectures, and directions for future work. Most of the applications lie within the machine learning/artificial intelligence domain; however, we also cover other applications to provide a thorough picture. The survey is written to be useful for both newcomers and practitioners.
“…Besides the few-shot classification task that we highlighted in this work, there are several tantalizing prospects for the HD learned patterns in the key memory. They form vector-symbolic representations that can directly be used for reasoning, or for multimodal fusion across separate networks [38]. The key-value memory also becomes the central ingredient in many recent models for unsupervised and contrastive learning [39][40][41], where a huge number of prototype vectors should be efficiently stored, compared, compressed, and retrieved.…”
Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their ability to relearn and adapt to new data. Memory-augmented neural networks enhance neural networks with an explicit memory to overcome these issues. Access to this explicit memory, however, occurs via soft read and write operations involving every individual memory entry, resulting in a bottleneck when implemented using the conventional von Neumann computer architecture. To overcome this bottleneck, we propose a robust architecture that employs a computational memory unit as the explicit memory, performing analog in-memory computation on high-dimensional (HD) vectors while closely matching 32-bit software-equivalent accuracy. This is achieved by a content-based attention mechanism that represents unrelated items in the computational memory with uncorrelated HD vectors, whose real-valued components can be readily approximated by binary or bipolar components. Experimental results demonstrate the efficacy of our approach on few-shot image classification tasks on the Omniglot dataset using more than 256,000 phase-change memory devices. Our approach effectively merges the richness of deep neural network representations with HD computing, paving the way for robust vector-symbolic manipulations applicable in reasoning, fusion, and compression.
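To make the content-based attention concrete, here is a minimal software sketch of a key-value memory over bipolar HVs: a query is compared against all stored keys by cosine similarity, the similarities are sharpened into attention weights, and the read-out is a weighted sum of the values. The sizes, the softmax sharpening, and all names are illustrative assumptions; the paper's actual mechanism runs on analog phase-change memory hardware.

    import numpy as np

    D, N_CLASSES = 512, 5  # illustrative sizes
    rng = np.random.default_rng(0)

    # Key memory of (nearly) uncorrelated bipolar HVs, one per class,
    # paired with one-hot values.
    keys = np.where(rng.standard_normal((N_CLASSES, D)) >= 0, 1, -1).astype(np.float32)
    values = np.eye(N_CLASSES, dtype=np.float32)

    def attend(query, beta=10.0):
        # Content-based attention: cosine similarity of the query to
        # every key, sharpened by a softmax with inverse temperature.
        sims = keys @ query / (np.linalg.norm(keys, axis=1) * np.linalg.norm(query))
        w = np.exp(beta * sims)
        w /= w.sum()
        return w @ values  # soft read: weighted sum of stored values

    query = keys[2] + 0.3 * rng.standard_normal(D)  # noisy copy of key 2
    print(np.argmax(attend(query)))  # expected: 2

Because unrelated keys are uncorrelated, a noisy query stays much closer to its own key than to any other, which is why the soft read degrades gracefully when the real-valued components are approximated by binary or bipolar ones.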
“…The human brain remains the most sophisticated processing component that has ever existed. The ever-growing research in biological vision, cognitive psychology, and neuroscience has given rise to many concepts that have led to prolific advancement in artificial intelligence for accomplishing cognitive tasks [41][42][43].…”
Recently, brain-inspired computing models have shown great potential to outperform today's deep learning solutions in terms of robustness and energy efficiency. In particular, Spiking Neural Networks (SNNs) and HyperDimensional Computing (HDC) have shown promising results in enabling efficient and robust cognitive learning. Despite this success, the two brain-inspired models have different strengths: while SNNs mimic the physical properties of the human brain, HDC models the brain on a more abstract and functional level. Their design philosophies demonstrate complementary patterns that motivate their combination. With the help of the classical psychological model of memory, we propose SpikeHD, the first framework that fundamentally combines spiking neural networks and hyperdimensional computing. SpikeHD generates a scalable and strong cognitive learning system that better mimics brain functionality. SpikeHD exploits spiking neural networks to extract low-level features by preserving the spatial and temporal correlation of raw event-based spike data. Then, it utilizes HDC to operate over the SNN output by mapping the signal into high-dimensional space, learning the abstract information, and classifying the data. Our extensive evaluation on a set of benchmark classification problems shows that SpikeHD provides the following benefits compared to an SNN architecture: (1) it significantly enhances learning capability by exploiting two-stage information processing, (2) it enables substantial robustness to noise and failure, and (3) it reduces the network size and the number of parameters required to learn complex information.
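The HDC stage of such a two-stage pipeline is straightforward to sketch in software: feature vectors (random stand-ins here for SNN outputs) are projected into high-dimensional bipolar space, class prototypes are formed by bundling the training HVs of each class, and inputs are classified by nearest prototype. The random-projection encoder and every name below are illustrative assumptions, not SpikeHD's actual implementation.

    import numpy as np

    F, D, C = 64, 10_000, 3  # feature size, HV dimensionality, classes (illustrative)
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((F, D))  # fixed random projection into HD space

    def encode(features):
        # Map a low-dimensional feature vector to a bipolar HV.
        return np.where(features @ proj >= 0, 1, -1).astype(np.int8)

    def train(features_list, labels):
        # Bundle (sum) the HVs of each class into one prototype per class.
        protos = np.zeros((C, D), dtype=np.int32)
        for f, y in zip(features_list, labels):
            protos[y] += encode(f)
        return protos

    def classify(protos, features):
        # Nearest prototype by dot-product similarity.
        return int(np.argmax(protos @ encode(features)))

    feats = [rng.standard_normal(F) for _ in range(30)]
    labels = [i % C for i in range(30)]
    protos = train(feats, labels)
    print(classify(protos, feats[0]))  # expected: 0 (a training sample)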