2006 IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/iros.2006.281849
Sound Localization for Humanoid Robots - Building Audio-Motor Maps based on the HRTF

Cited by 94 publications (75 citation statements)
References 15 publications
“…Such an idea was successfully assessed in [29] through a dedicated neural network able to generalize learning to new acoustic conditions. One can also cite [30], or [31], where the iCub humanoid robot's head was endowed with two pinnae. The localization is performed by mapping the aforementioned sound features to the corresponding location of the source through a learning method.…”
Section: Horizontal Localization (citation type: mentioning)
Confidence: 99%
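As a rough illustration of the mapping idea in this excerpt, the sketch below regresses binaural feature vectors onto source azimuth with a small feed-forward network. The feature dimension, training data, and network size are hypothetical placeholders, not the representation used in [29]-[31].

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: one binaural feature vector (e.g. per-band
# ITD/ILD values) per recording, paired with the azimuth (degrees) at which
# it was captured, e.g. gathered while rotating the robot head.
n_samples, n_features = 500, 32
X_train = rng.normal(size=(n_samples, n_features))
y_train = rng.uniform(-90.0, 90.0, size=n_samples)

# Small feed-forward network, in the spirit of the learned mappings above.
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# At run time a new feature vector is mapped directly to an azimuth estimate.
x_new = rng.normal(size=(1, n_features))
print(f"estimated azimuth: {model.predict(x_new)[0]:.1f} deg")

With real recordings, the same interface lets the learned map be retrained or evaluated under new acoustic conditions, which is the generalization point made about [29].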
“…Reproducing such capabilities in Robotics is a difficult problem due to the lack of a model of the pinnae shapes which lead to elevation dependent notches. Yet, as a rule of thumb, these shapes must be irregular or asymmetric, and artificial pinnae were proposed in [37], [38], [31], [35], or [39]. Fig.…”
Section: Vertical Localization: Spectral Cues (citation type: mentioning)
Confidence: 99%
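To make the notch cue concrete, here is a minimal sketch that picks local minima out of an HRTF magnitude response. The response below is synthetic (a single notch placed at 8 kHz), standing in for measurements taken with the artificial pinnae cited above.

import numpy as np
from scipy.signal import find_peaks

fs = 44100
freqs = np.fft.rfftfreq(1024, d=1.0 / fs)

# Synthetic magnitude response (dB) with one 20 dB notch centred at 8 kHz,
# a placeholder for a real pinna-induced notch whose centre frequency
# shifts with source elevation.
mag_db = np.zeros_like(freqs)
mag_db -= 20.0 * np.exp(-((freqs - 8000.0) / 800.0) ** 2)

# Notches are local minima of the magnitude response: invert and peak-pick.
idx, _ = find_peaks(-mag_db, prominence=6.0)
print("candidate notch frequencies (Hz):", freqs[idx])

Tracking how the detected notch frequency moves across measured responses is what turns this spectral feature into an elevation cue.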
“…The auditory sensor consists of two microphones and artificial pinnae [26]. To obtain a more compact representation, the sound signal is transformed into a tonotopic sound representation (sound features).…”
Section: Sensing Units (citation type: mentioning)
Confidence: 99%
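A tonotopic representation of this kind can be sketched as the log energies of a bank of frequency-ordered band-pass filters. The filterbank below (second-order Butterworth bands with log-spaced edges) is an illustrative stand-in, since the excerpt does not specify the exact front end.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def tonotopic_features(signal, fs, n_bands=16, f_lo=100.0, f_hi=7000.0):
    """One log-energy per band, ordered from low to high centre frequency."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        feats.append(np.log(np.mean(band ** 2) + 1e-12))
    return np.asarray(feats)

fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
test_tone = np.sin(2 * np.pi * 1000.0 * t)   # 1 kHz test signal
print(tonotopic_features(test_tone, fs))     # energy peaks in the ~1 kHz band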
“…We note that the vast majority of current SSL approaches mainly focus on a rough estimation of the azimuth, or one-dimensional (1D) localization [9,5,10,7], and that very few perform 2D localization [8]. Alternatively, some approaches [6,11,12] bypass the explicit mapping model and perform 2D localization using an exhaustive search in an HRTF look-up table. However, this process is unstable and hardly scalable in practice, as the number of required associations yields prohibitive memory and computational costs.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
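The look-up-table strategy discussed in this excerpt amounts to an exhaustive nearest-neighbour search over HRTF-derived features. A minimal sketch follows, with random placeholder entries standing in for a real HRTF set.

import numpy as np

rng = np.random.default_rng(1)

# Grid of candidate directions and one feature vector per direction.
azimuths = np.arange(-90, 91, 5)            # degrees
elevations = np.arange(-40, 81, 10)         # degrees
grid = np.array([(az, el) for az in azimuths for el in elevations])
table = rng.normal(size=(len(grid), 64))    # hypothetical HRTF-derived features

def localize(observation, table, grid):
    """Exhaustive nearest-neighbour search over the HRTF table (O(N) per query)."""
    dists = np.linalg.norm(table - observation, axis=1)
    return grid[np.argmin(dists)]

obs = rng.normal(size=64)
az, el = localize(obs, table, grid)
print(f"best match: azimuth {az} deg, elevation {el} deg")

Because every query scans all stored associations, the cost grows linearly with the grid resolution, which is the scalability concern raised in the excerpt.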