2008
DOI: 10.1016/j.neunet.2007.10.005
A biologically motivated visual memory architecture for online learning of objects

Abstract: We present a biologically motivated architecture for object recognition that is based on a hierarchical feature detection model in combination with a memory architecture that implements short-term and long-term memory for objects. A particular focus is the functional realization of online and incremental learning for the task of appearance-based object recognition of many complex-shaped objects. We propose some modifications of learning vector quantization algorithms that are especially adapted to the task of …

Cited by 27 publications (15 citation statements)
References 45 publications
“…We use the learning technique proposed in [23] that has gained much attention recently in the context of big data and interpretable models due to its flexibility and intuitive classification scheme, see e.g. [33], [34], [35], [36], [37], [38], [39], [40], [41], [42]. Essentially, it offers an efficient way for prototype-based data classification.…”
Section: Learning Vector Quantization (mentioning, confidence: 99%)
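The prototype-based classification scheme referred to above can be illustrated with a minimal LVQ1 sketch (this is the classical variant, not necessarily the exact modification used in the paper; all names and parameters here are illustrative):

```python
import numpy as np

# Minimal LVQ1 sketch: each class is represented by prototype vectors, and a
# sample is classified with the label of its nearest prototype. Training moves
# the winning prototype toward same-class samples and away from others.

def lvq1_train(X, y, protos, proto_labels, lr=0.1, epochs=20):
    """One-winner LVQ1 update over several epochs."""
    protos = protos.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            d = np.linalg.norm(protos - x, axis=1)
            w = np.argmin(d)  # index of the winning prototype
            sign = 1.0 if proto_labels[w] == label else -1.0
            protos[w] += sign * lr * (x - protos[w])
    return protos

def lvq_predict(X, protos, proto_labels):
    """Assign each sample the label of its nearest prototype."""
    d = np.linalg.norm(protos[None, :, :] - X[:, None, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Toy data: two well-separated Gaussian clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
protos = lvq1_train(X, y, np.array([[0.5, 0.5], [2.5, 2.5]]),
                    np.array([0, 1]))
acc = (lvq_predict(X, protos, np.array([0, 1])) == y).mean()
```

The intuitive, interpretable aspect mentioned in the quote comes from the prototypes themselves: each one is a point in input space that can be inspected directly.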
“…The final number of allocated nodes w_k and the assigned category labels u_k correspond not only to the difficulty of the different categories themselves but also to the within-category variance. Finally, the long-term stability of these incrementally learned nodes is ensured based on an individual node learning rate Θ_k as proposed in [7].…”
Section: Category Learning Vector Quantization (mentioning, confidence: 99%)
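A common way to realize such an individual node learning rate Θ_k is to let it decay with the number of times node k has won, so long-trained nodes become stable while newly inserted nodes stay plastic. A hypothetical sketch (the decay schedule and the parameters `theta0` and `tau` are illustrative, not taken from [7]):

```python
import numpy as np

def node_learning_rate(win_count, theta0=0.3, tau=50.0):
    """Per-node learning rate Θ_k that shrinks as node k accumulates wins,
    freezing frequently used nodes while keeping fresh nodes plastic."""
    return theta0 * tau / (tau + win_count)

def update_winner(node, x, win_count):
    """Move the winning node toward sample x with its individual rate."""
    theta = node_learning_rate(win_count)
    return node + theta * (x - node), win_count + 1

# A fresh node (0 wins) moves much further than a well-trained one (200 wins)
new_node, count = update_winner(np.array([0.0, 0.0]), np.array([1.0, 1.0]), 0)
```

This per-node decay is what decouples stability from plasticity: global learning-rate schedules would freeze newly inserted nodes along with the old ones.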
“…Similar to Step 1, we test new LVQ nodes only for erroneous categories. In contrast to the node insertion rule proposed in [7], where nodes are inserted for training vectors with the smallest distance to wrong winning nodes, we propose to insert new LVQ nodes based on the training vectors x_i with the most categorization errors. This leads to a more compact representation, because a single node typically improves the representation of several categories.…”
Section: Learning Dynamics (mentioning, confidence: 99%)
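The error-driven insertion rule described above can be sketched as follows (names and the multi-label encoding are illustrative assumptions, not the paper's exact interface): instead of placing a node at the vector closest to a wrong winner, the candidate is the training vector that accumulates the most categorization errors across all categories.

```python
import numpy as np

def insertion_candidate(y_true, y_pred):
    """Return the index of the training vector with the most categorization
    errors; a new LVQ node would be placed at that vector with its labels.

    y_true, y_pred: (n_samples, n_categories) binary membership arrays.
    """
    errors = (y_true != y_pred).sum(axis=1)  # per-sample error count
    return int(np.argmax(errors))

# Sample 1 is wrong in both categories, so it becomes the insertion point
y_true = np.array([[1, 0], [1, 1], [0, 1]])
y_pred = np.array([[1, 0], [0, 0], [0, 1]])
idx = insertion_candidate(y_true, y_pred)
```

Because the candidate maximizes errors summed over categories, one inserted node tends to repair several category representations at once, which is the compactness argument made in the quote.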
“…Thus stability can be better achieved compared to the multi-layer perceptron (MLP), where all weights are modified at each learning step. Additionally, life-long learning architectures often use a node-specific learning rate combined with an incremental node insertion rule (Hamker, 2001; Furao & Hasegawa, 2006; Kirstein et al., 2008) to approach the “stability-plasticity dilemma”. The major drawback of those identification architectures is that they separate co-occurring classes inefficiently.…”
Section: B. Life-long Learning Architectures (mentioning, confidence: 99%)