2022
DOI: 10.1088/2634-4386/ac6b50

Event driven bio-inspired attentive system for the iCub humanoid robot on SpiNNaker

Abstract: Attention leads the gaze of the observer towards interesting items, allowing a detailed analysis only for selected regions of a scene. A robot can take advantage of the perceptual organisation of the features in the scene to guide its attention to better understand its environment. Current bottom-up attention models work with standard RGB cameras, requiring a significant amount of time to detect the most salient item in a frame-based fashion. Event-driven cameras are an innovative technology to asynchronously d…

Cited by 9 publications (8 citation statements)
References 51 publications (81 reference statements)

“…All of these implementations led to the design of the figure-ground organisation model, which detects proto-objects and assigns figure-ground relations by exploiting feedback and feedforward connections and filtering with the Von Mises kernel [21]. The proposed model takes inspiration from [21] and is a follow-up of previous event-driven attempts to detect proto-objects [22,48,49]. The first event-based implementation of [25] was proposed by Iacono et…”
Section: Figure-ground Models
Citation type: Mentioning
Confidence: 99%
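
The Von Mises kernel referenced in [21] shapes half-ring grouping filters for proto-object and border-ownership computation. The exact parameterisation is not given in this excerpt, so the sketch below assumes a generic annular kernel with a Von Mises angular profile and a Gaussian radial ring; every parameter name is illustrative rather than taken from [21].

import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind, order 0

def von_mises_ring_kernel(size=31, radius=8.0, kappa=2.0, mu=0.0, sigma_r=2.0):
    """Annular filter with a Von Mises angular profile (illustrative).

    kappa concentrates the response around the preferred direction mu,
    while sigma_r sets the radial thickness of the ring at the grouping
    radius. The radial Gaussian is an assumption, not the form in [21].
    """
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r = np.hypot(xx, yy)
    theta = np.arctan2(yy, xx)
    # Von Mises density over angle: exp(kappa*cos(theta - mu)) / (2*pi*I0(kappa))
    angular = np.exp(kappa * np.cos(theta - mu)) / (2.0 * np.pi * i0(kappa))
    # Gaussian ring over radius, peaking at the grouping radius
    radial = np.exp(-((r - radius) ** 2) / (2.0 * sigma_r ** 2))
    kernel = angular * radial
    return kernel / kernel.sum()  # normalise to unit mass

# A downward-facing half-ring, as used for one border-ownership polarity
kernel = von_mises_ring_kernel(mu=np.pi / 2)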
“…Table II compares our contribution with the neuromorphic saliency models implemented in [17], [19] as well as with our initial model [5]. The data presented here were either retrieved directly from the information given by the authors or calculated from the description of each model.…”
Section: E. Comparison With State-of-the-art
Citation type: Mentioning
Confidence: 99%
“…TABLE II: Comparison between our contribution and the state-of-the-art, with input data of size w × h (w the width and h the height), OL the overlapping percentage described in [17], and div the dividing factor between the input layer and the saliency detector in [5]. A numerical value was calculated for each theoretical estimation, for w = h = 128, OL = 5%, and div = 16.…”
Section: E. Comparison With State-of-the-art
Citation type: Mentioning
Confidence: 99%
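
The closed-form expressions behind Table II are not reproduced in this excerpt, so the sketch below only illustrates how the quoted operating point (w = h = 128, OL = 5%, div = 16) might be evaluated, assuming div subsamples each input dimension and OL is the fractional overlap between neighbouring receptive fields; both readings are assumptions based on the parameter descriptions above.

# Illustrative evaluation; the formulas are assumptions, not those of [5] or [17].
w, h = 128, 128      # input width and height in pixels
OL = 0.05            # overlapping percentage described in [17]
div = 16             # dividing factor between input layer and saliency detector in [5]

input_pixels = w * h                      # 16384 input pixels
saliency_units = (w // div) * (h // div)  # 8 x 8 = 64 units in the saliency layer
stride = div * (1.0 - OL)                 # 15.2-pixel stride if fields overlap by OL

print(input_pixels, saliency_units, stride)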
“…This poses strong timing requirements on the (simulated) nervous system, which needs to interact with its surroundings in real time. Especially for biologically inspired spiking neural networks (SNNs), this prerequisite has led to the deployment of specialized neuromorphic accelerators for neurorobotic tasks (Richter et al., 2016; Blum et al., 2017; Milde et al., 2017; Kreiser et al., 2018; Yan et al., 2021; D'Angelo et al., 2022). These systems integrate specific analog or digital modules for the efficient simulation (or emulation) of spiking neurons and are thereby capable of running such networks at biological real time.…”
Section: Introduction
Citation type: Mentioning
Confidence: 99%
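
The excerpt motivates neuromorphic accelerators by the need to run SNNs against wall-clock deadlines. As a point of reference only, a minimal leaky integrate-and-fire (LIF) update is sketched below to show the per-timestep work such hardware parallelises; it is a generic textbook neuron with illustrative parameters, not the SpiNNaker model used by the authors.

import numpy as np

def lif_step(v, input_current, dt=1.0, tau=20.0,
             v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """One Euler step of a leaky integrate-and-fire population.

    v holds membrane potentials in mV; time constants are in ms.
    All values are generic textbook defaults, not parameters of any
    cited model.
    """
    v = v + (-(v - v_rest) + input_current) * (dt / tau)  # leaky integration
    spikes = v >= v_thresh              # neurons crossing threshold fire
    v = np.where(spikes, v_reset, v)    # reset the neurons that fired
    return v, spikes

# Biological real time means each 1 ms model step must also finish
# within 1 ms of wall-clock time; that is the guarantee dedicated
# neuromorphic hardware provides for large populations.
v = np.full(1000, -65.0)
v, spikes = lif_step(v, input_current=20.0 * np.random.rand(1000))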