2012 IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/iros.2012.6385565
Integration of sound source localization and separation to improve Dialogue Management on a robot

Cited by 11 publications (5 citation statements)
References 12 publications
“…This Reliability-Weighted Phase Transform (RWPhaT) strategy results in a new adaptive frequency weight Ψ(f). This GCC strategy is still used in [86], on an 8-microphone array embedded on the Spartacus robot, to show the efficiency of a complete artificial audition system for speech recognition and dialogue management. Different adaptations of the PhaT processor have also been proposed.…”
Section: Music
confidence: 99%
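The GCC strategy referenced above builds on the standard GCC-PHAT processor, which whitens the cross-power spectrum so that only phase information drives the time-delay estimate between two microphones. The following is a minimal sketch of plain GCC-PHAT (not the RWPhaT variant, whose reliability weight Ψ(f) is described in the cited work); the function name, sample rate, and synthetic delay are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay of `sig` relative to `ref` using the
    Generalized Cross-Correlation Phase Transform (GCC-PHAT)."""
    n = len(sig) + len(ref)                # zero-pad to avoid circular wrap-around
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)                 # cross-power spectrum
    R /= np.abs(R) + 1e-12                 # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)              # generalized cross-correlation
    max_shift = n // 2
    if max_tau is not None:                # optionally bound the physical delay
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                      # delay in seconds

# Synthetic check: the first channel lags the reference by 25 samples.
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = np.concatenate((np.zeros(25), x[:-25]))
tau = gcc_phat(y, x, fs)
```

On a multi-microphone array such as the one on Spartacus, delays estimated pairwise this way are then combined to triangulate a source direction; the RWPhaT weight Ψ(f) replaces the uniform 1/|R(f)| weighting with a frequency-dependent reliability term.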
“…The following step in our research is to explore the use of SSL for improving the performance of ASR. Some interesting examples in this direction are presented in [35], [36], and [37]. These approaches make use of microphone arrays to localize the speech sources in the environment.…”
Section: B. Computational Background and Related Work
confidence: 99%
“…Speech-based elements of teleoperation include speech understanding/synthesis (Marin, Vila, Sanz, & Marzal, 2002) and scripted speech acts wherein the humanoid can be controlled by issuing speech commands (Lu, Liu, Chen, & Huang, 2010). Additionally, speech recognition (supported by sound localization when environments are very noisy) can be used by a mobile robot to enable it to interact with bystanders/clients in the environment (Fréchette, Létourneau, Valin, & Michaud, 2012; Valin, Yamamoto, et al., 2007; Yamamoto et al., 2007).…”
Section: Sight and Visualization
confidence: 99%