2020
DOI: 10.1007/978-3-030-64313-3_20
Insect Inspired View Based Navigation Exploiting Temporal Information

Cited by 7 publications (5 citation statements)
References 25 publications
“…Algorithmically, a temporal window can be introduced, where only a limited amount of the sequentially stored snapshots is used for familiarity detection. This way, general aliasing can be minimised as well as more complex routes followed (Kagioulis et al, 2021).…”
Section: Discussion
confidence: 99%
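The temporal-window idea quoted above can be illustrated with a minimal sketch. This is not the cited authors' implementation; the familiarity measure (sum of squared pixel differences) and the function names are illustrative assumptions. The point is only that restricting matching to snapshots near the last matched route position avoids aliasing between visually similar but distant places.

```python
import numpy as np

def familiarity(view, snapshot):
    # Lower value = more familiar (sum of squared pixel differences).
    # This metric is an assumption for illustration, not the paper's model.
    return float(np.sum((view - snapshot) ** 2))

def best_match_windowed(view, snapshots, last_idx, window=3):
    """Compare the current view only against sequentially stored
    snapshots inside a temporal window centred on the last matched
    route position, so distant look-alike places cannot alias."""
    lo = max(0, last_idx - window)
    hi = min(len(snapshots), last_idx + window + 1)
    errors = [familiarity(view, snapshots[i]) for i in range(lo, hi)]
    return lo + int(np.argmin(errors))
```

If two distant snapshots on the route are visually identical, a global comparison can lock onto the wrong one, whereas the windowed comparison only considers local candidates.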
“…Follow-up robotic studies have explored extensions ranging from the compact encoding of visual scenes [113], to scanning-free route following [114], applicability to aerial navigation [115], and even the use of temporal cues to improve visual place recognition [116,117]. Moreover, direct insights were drawn towards the function of the observed learning walks and flights of insects [118] as they begin foraging [119].…”
Section: Mental Faculties: Revealing the Neural Basis Of Invertebrate...
confidence: 99%
“…Recent results showing that lesioning the MB in ants specifically affects performance on tasks requiring learned (but not innate) visual orientation have supported this assumption (42–44). However, computational models based on the MB (for both olfactory and visual learning) have also mostly used static input patterns (28, 45–49) and, for route following, have evaluated performances in somewhat simplified visual environments with little of the variability that occurs in the real world. Our current model, by contrast, addresses the problem of learning and recognizing, on repeated traversal, the pattern of input spikes produced from visual change detection using an event camera (Fig.…”
Section: Introduction
confidence: 99%