2008 IEEE International Conference on Robotics and Automation
DOI: 10.1109/robot.2008.4543329
Multimodal saliency-based bottom-up attention: a framework for the humanoid robot iCub

Abstract: This work presents a multimodal bottom-up attention system for the humanoid robot iCub, where the robot's decisions to move eyes and neck are based on visual and acoustic saliency maps. We introduce a modular and distributed software architecture which is capable of fusing visual and acoustic saliency maps into one egocentric frame of reference. This system endows the iCub with an emergent exploratory behavior reacting to combined visual and auditory saliency. The developed software modules provide a f…
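The abstract describes fusing visual and acoustic saliency maps into a single egocentric map that drives gaze. A minimal sketch of that idea, assuming both maps are already registered to a common frame; the function names (`fuse_saliency`, `most_salient`) and the fixed weights `w_vis`, `w_aud` are illustrative, not taken from the paper:

```python
# Sketch: fuse two registered 2D saliency maps by a weighted sum,
# renormalize, and pick the peak as a candidate gaze target.
# Weights and names are illustrative assumptions, not the paper's API.

def fuse_saliency(visual, acoustic, w_vis=0.5, w_aud=0.5):
    """Weighted sum of two equally sized 2D saliency maps, rescaled to [0, 1]."""
    fused = [[w_vis * v + w_aud * a for v, a in zip(v_row, a_row)]
             for v_row, a_row in zip(visual, acoustic)]
    peak = max(max(row) for row in fused)
    if peak > 0:
        fused = [[x / peak for x in row] for row in fused]
    return fused

def most_salient(fused):
    """Return (row, col) of the fused map's maximum: a candidate gaze target."""
    return max(((r, c) for r, row in enumerate(fused) for c, _ in enumerate(row)),
               key=lambda rc: fused[rc[0]][rc[1]])

visual = [[0.1, 0.2], [0.0, 0.9]]    # e.g. a bright moving object bottom-right
acoustic = [[0.8, 0.1], [0.0, 0.3]]  # e.g. a sound source top-left
fused = fuse_saliency(visual, acoustic)
print(most_salient(fused))  # → (1, 1)
```

In the actual system the fusion happens across distributed software modules and the resulting target is expressed in an egocentric frame shared by eye and neck control; the sketch only shows the map-combination step.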

Cited by 112 publications (111 citation statements)
References 21 publications
“…shifting the focus of attention for efficient scene exploration (e.g. [8], [10]) and analysis (e.g. [14], [20], [21]) -has attracted increasing interest during the last years.…”
Section: Related Work
confidence: 99%
“…For active scene exploration, saliency can be used to steer the sensors towards salient - thus potentially relevant - regions to detect objects of interest (e.g. [8], [10], [12]). Combining these methods, [14] utilized bottom-up attention, stereo vision and SIFT to perform robust and efficient scene analysis on a mobile robot.…”
Section: Related Work
confidence: 99%
“…9). Whereas different methodologies can be used to define the target of the binocular fixation on the left and right image planes (Samarawickrama and Sabatini 2007; Ruesch et al. 2008; Mishra et al. 2009; Rea et al. 2014; Antonelli et al. 2014; Beuth and Hamker 2015), for the sake of simplicity we used a color segmentation based on the Watershed algorithm (OpenCV implementation, Bradski and Kaehler 2008). The algorithm computes a binary mask corresponding to a selected color, i.e.…”
Section: Integrated Vergence and Version Control
confidence: 99%
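The citing passage above describes computing a binary mask for a selected color as the fixation target. A hedged sketch of that mask idea; the cited work uses OpenCV's Watershed segmentation, whereas this stand-in uses a simple per-channel threshold, and `target` and `tol` are illustrative parameters:

```python
# Sketch: binary mask marking pixels close to a selected RGB color.
# A simple threshold stand-in for the Watershed segmentation named in
# the quoted passage; `target` and `tol` are illustrative assumptions.

def color_mask(image, target, tol=30):
    """Return a mask of 0/1: 1 where an RGB pixel is within `tol` of `target`."""
    return [[1 if all(abs(p - t) <= tol for p, t in zip(px, target)) else 0
             for px in row]
            for row in image]

image = [[(250, 10, 10), (10, 250, 10)],   # reddish, greenish
         [(240, 20, 5), (0, 0, 0)]]        # reddish, black
mask = color_mask(image, target=(255, 0, 0), tol=30)
print(mask)  # → [[1, 0], [1, 0]]
```

The centroid of such a mask can then serve as the fixation target on each image plane, which is the role the segmentation plays in the quoted vergence/version control pipeline.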
“…the act of directing the sensors towards salient stimuli, and scene exploration for robotic applications has been addressed by several authors in recent years (see, e.g., [3], [5], [7], [9], [27]- [29]). The main difference between (covert) attention mechanisms that operate on still images, i.e.…”
Section: Related Work
confidence: 99%