2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8852098

Self-organizing neurons: toward brain-inspired unsupervised learning

Abstract: In recent years, Deep Neural Networks have reached the highest performance in image classification. Nevertheless, this success mostly rests on supervised, offline learning: such networks thus require huge labeled datasets for training, and once training is done they cannot adapt to changes in the data from the environment. In the context of brain-inspired computing, we apply Kohonen-based Self-Organizing Maps for unsupervised learning without labels, and we explore original extensions such as the Dynamic …
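For readers unfamiliar with Kohonen maps, the following minimal sketch shows the core unsupervised update the abstract refers to: find the best-matching unit (BMU), then pull it and its grid neighbors toward the input. The map size, decay schedules, and learning rates here are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of Kohonen SOM training. Grid size, learning rate,
# and neighborhood schedule are assumptions, not the paper's values.
import numpy as np

def train_som(data, rows=10, cols=10, epochs=20, lr0=0.5, sigma0=3.0):
    """Train a 2-D self-organizing map on `data` of shape (n_samples, n_features)."""
    rng = np.random.default_rng(0)
    weights = rng.random((rows * cols, data.shape[1]))
    # Grid coordinates of each neuron, used by the neighborhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Exponentially decay the learning rate and neighborhood radius.
            frac = step / n_steps
            lr = lr0 * np.exp(-3.0 * frac)
            sigma = sigma0 * np.exp(-3.0 * frac)
            # Best-matching unit (BMU): the neuron closest to the input.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighborhood centered on the BMU's grid position.
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))
            # Move every neuron toward the input, weighted by neighborhood.
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights
```

No label is used anywhere in this loop; a trained map can be named after the fact from a handful of labeled samples, as the citing papers below describe.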

Cited by 17 publications (32 citation statements) · References 18 publications

“…However, classification is based on either supervised or semi-supervised learning, as mapping multi-sensory modalities is not sufficient: we need to know the class corresponding to each activation pattern. We proposed in Reference [52] a labeling method, summarized in Section 3.1.2, based on very few labeled data, so that no label is used in the learning process itself, as explained in Section 3.1. The same approach is used in Reference [48], but the authors rely on the complete labeled dataset, as further discussed in Section 5.4.1.…”
Section: Discussion (mentioning)
confidence: 99%
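The post-labeling step described in the quotation above keeps training fully unsupervised and uses the few labeled samples only afterward, to name the neurons. Below is a hedged sketch of one plausible realization: a BMU majority vote over the labeled subset. The function names and the voting rule are illustrative assumptions, not the exact procedure of Reference [52].

```python
# Sketch of SOM post-labeling from very few labeled samples.
# The majority-vote rule below is an assumption for illustration.
import numpy as np

def label_neurons(weights, x_labeled, y_labeled, n_classes):
    """Assign each neuron the class that most often wins it as BMU.

    y_labeled holds integer class indices in [0, n_classes).
    """
    votes = np.zeros((len(weights), n_classes))
    for x, y in zip(x_labeled, y_labeled):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        votes[bmu, y] += 1
    return votes.argmax(axis=1)  # one class label per neuron

def classify(weights, neuron_labels, x):
    """Predict the class of an input as the label of its BMU."""
    return neuron_labels[np.argmin(np.linalg.norm(weights - x, axis=1))]
```

The key property is that the labels never influence the SOM weights; they only interpret an already-organized map.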
“…In this section, we summarise our previous work on SOM post-labeled unsupervised learning [52], then propose the Reentrant Self-Organizing Map (ReSOM) shown in Figure 2 for learning multimodal associations, labeling one modality based on the other and converging the two modalities through cooperation and competition for better classification accuracy. We use SOMs and Hebbian-like learning sequentially to perform multimodal learning: first, unimodal representations are obtained with SOMs and, second, multimodal representations develop through the association of unimodal maps via bidirectional synapses.…”
Section: Proposed Model: Reentrant Self-Organizing Map (ReSOM) (mentioning)
confidence: 99%
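The ReSOM description above separates unimodal SOM training from a second stage that associates the two maps through bidirectional, Hebbian-like synapses. The sketch below illustrates that second stage under stated assumptions: the outer-product update and the max-normalization are illustrative choices, not the authors' exact equations.

```python
# Sketch of Hebbian-like association between two unimodal SOMs, in the
# spirit of the ReSOM quotation above. Update rule and normalization
# are assumptions, not the cited paper's equations.
import numpy as np

def hebbian_associate(acts_a, acts_b, eta=0.01):
    """Strengthen synapses between co-active neurons of two maps.

    acts_a, acts_b: (n_samples, n_neurons_*) activations obtained by
    presenting the same multimodal samples to each unimodal SOM.
    """
    w = np.zeros((acts_a.shape[1], acts_b.shape[1]))
    for a, b in zip(acts_a, acts_b):
        w += eta * np.outer(a, b)   # Hebb: co-activation increases weight
    return w / max(w.max(), 1e-12)  # keep weights bounded

def transfer_label(w, act_a, labels_b):
    """Label modality A's activity via the associated neurons of map B."""
    b_act = act_a @ w               # propagate through learned synapses
    return labels_b[np.argmax(b_act)]
```

Because the synapses are bidirectional, the same matrix (transposed) lets map B's activity be interpreted through map A, which is what allows one modality to label the other.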