2017
DOI: 10.1080/01691864.2017.1356746

Hierarchically self-organizing visual place memory

Cited by 5 publications (6 citation statements). References 37 publications.
“…This knowledge may have been accumulated either through direct experience 1 or through previous rounds of learning from other robots. The accumulation of knowledge in the long-term place memory has been presented previously in Erkent et al (2017). Two of its features are integral to the merging of appearance-based place knowledge.…”
Section: Long-term Place Memory (mentioning)
confidence: 99%
“…Each robot is assumed to have compatible visual sensing and to retain its appearance-based place knowledge in its long-term place memory. This is a memory in which each place refers to a spatial region as defined by a collection of appearances and the knowledge of all learned places is organized in a tree hierarchy (Erkent et al 2017). In the proposed approach, place knowledge is merged based on long-term place memories.…”
Section: Introduction (mentioning)
confidence: 99%
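
The long-term place memory described in this excerpt (each place a spatial region defined by a collection of appearances, with all learned places organized in a tree hierarchy) can be pictured with a minimal Python sketch. This is an illustrative reconstruction under stated assumptions, not the implementation of Erkent et al. (2017); all class and function names are hypothetical.

# Minimal sketch of a tree-organized place memory: leaves are learned places
# holding collections of appearance descriptors, internal nodes group similar
# places. Names and structure are illustrative, not taken from the paper.
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class PlaceNode:
    name: str
    descriptors: List[np.ndarray] = field(default_factory=list)  # appearances observed at this place
    children: List["PlaceNode"] = field(default_factory=list)

    def is_leaf(self) -> bool:
        return not self.children

    def centroid(self) -> Optional[np.ndarray]:
        # Pool this node's own appearances with its children's centroids.
        pool = list(self.descriptors)
        pool += [c for c in (child.centroid() for child in self.children) if c is not None]
        return np.mean(pool, axis=0) if pool else None

def localize(root: PlaceNode, query: np.ndarray) -> PlaceNode:
    # Descend the hierarchy, following the child whose centroid is closest
    # to the query appearance, until a leaf place is reached.
    node = root
    while not node.is_leaf():
        node = min(node.children, key=lambda c: np.linalg.norm(c.centroid() - query))
    return node

# Toy usage: two places under one region node.
kitchen = PlaceNode("kitchen", [np.array([0.9, 0.1]), np.array([0.8, 0.2])])
corridor = PlaceNode("corridor", [np.array([0.1, 0.9])])
root = PlaceNode("building", children=[kitchen, corridor])
print(localize(root, np.array([0.85, 0.15])).name)  # -> kitchen

A query appearance is localized coarse-to-fine by descending toward the child whose pooled appearance is closest, which is the kind of lookup such a hierarchy affords.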
“…Section 3.7 contains additional comparative results with two unsupervised methods. Our higher performance as compared to the work of [50] is justified due to the usage of local SURF features from images instead of the bubble descriptors. Even though [52] used sophisticated image descriptors, their method achieves lower accuracy than the proposed system since their parameters for the Self-Organizing Map do not generalize in each dataset.…”
Section: Comparative Results (mentioning)
confidence: 97%
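
For context, the local SURF features mentioned in this excerpt can be extracted with OpenCV's contrib module; the snippet below is a generic sketch of that step, not code from the cited systems, and the file name and Hessian threshold are placeholders.

# Sketch of extracting local SURF descriptors with OpenCV. Requires an
# opencv-contrib-python build with the nonfree (patented) modules enabled;
# the file name and threshold below are placeholders.
import cv2

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.detectAndCompute(img, None)
# descriptors is an (N, 64) array of local appearance vectors, which can
# then be quantized (e.g. into a visual vocabulary) for place recognition.
print(len(keypoints), descriptors.shape)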
“…Visual features of a robot's traversed environment have been represented through the BoW model and, by means of a Neural Gas, the spatial information for each scene is clustered into semantically consistent groups [2]. Finally, the method proposed by [50] was based on the Single-Linkage (SLINK) agglomerative algorithm [51]. This unsupervised and incremental approach allows the robot to learn about organizing the observed environment and localizing in it.…”
Section: Semantic Information and Mapping (mentioning)
confidence: 99%
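
A batch Python sketch of single-linkage agglomerative clustering over appearance descriptors gives a sense of the grouping the SLINK-based approach builds; note that the cited method is incremental and unsupervised, whereas this SciPy example clusters a fixed toy set, and the data and cut threshold are made up for illustration.

# Batch sketch of single-linkage agglomerative clustering over appearance
# descriptors using SciPy; the cited SLINK approach maintains this kind of
# grouping incrementally as new observations arrive.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Toy descriptors from three visually distinct places (synthetic data).
descriptors = np.vstack([
    rng.normal(loc=0.0, scale=0.05, size=(20, 16)),
    rng.normal(loc=1.0, scale=0.05, size=(20, 16)),
    rng.normal(loc=2.0, scale=0.05, size=(20, 16)),
])

# Build the single-linkage merge tree over pairwise Euclidean distances.
merge_tree = linkage(descriptors, method="single", metric="euclidean")

# Cut the tree at a distance threshold to obtain place labels.
labels = fcluster(merge_tree, t=1.0, criterion="distance")
print("discovered places:", np.unique(labels))  # expect 3 clusters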