2005
DOI: 10.1007/0-387-28863-5_8
Models of Sound Localization

Cited by 29 publications (16 citation statements)
References 121 publications
“…A number of systems have taken a similar approach [6]-[10]. Localization in azimuth is a popular cue for segregating sound sources [11]. Spectral masking, sometimes called time-frequency masking, binary masking, or ideal binary masking, allows the separation of an arbitrary number of sources from a mixture by assuming that a single source is active at every time-frequency point.…”
Section: A. Background (mentioning, confidence: 99%)
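The single-dominant-source assumption behind ideal binary masking can be sketched as follows. This is an illustrative sketch with synthetic magnitude spectrograms, not any cited system's implementation; all names are hypothetical:

```python
import numpy as np

def ideal_binary_mask(spec_target, spec_interferer):
    """Ideal binary mask: 1 at each time-frequency point where the
    target's magnitude dominates the interferer's, else 0.
    Both inputs are magnitude spectrograms of the same shape."""
    return (spec_target > spec_interferer).astype(float)

# Toy magnitude spectrograms (frequency bins x time frames) for two sources.
rng = np.random.default_rng(0)
s1 = rng.random((64, 10))
s2 = rng.random((64, 10))

mask = ideal_binary_mask(s1, s2)
mixture = s1 + s2
estimate = mask * mixture  # rough reconstruction of source 1

print(mask.shape, float(mask.min()), float(mask.max()))
```

Because the mask is applied to the mixture, the estimate is only exact where one source truly dominates; that is precisely the single-active-source assumption the excerpt describes.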
“…Many models of mammalian auditory localization have been described in the literature; see [11] for a review. Most focus on localization within individual critical bands of the auditory system and are either based on cross-correlation [14] or the equalization-cancellation model [15], [16].…”
Section: A. Background (mentioning, confidence: 99%)
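The cross-correlation family of models mentioned in the excerpt estimates the interaural time difference (ITD) as the lag that maximizes the cross-correlation of the two ear signals. A minimal broadband sketch, assuming clean, discretely delayed signals and NumPy (real models operate per critical band):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference as the lag (in seconds)
    that maximizes the cross-correlation of the two ear signals.
    Positive ITD means the right-ear signal lags the left."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-(len(left) - 1), len(right))
    return lags[np.argmax(corr)] / fs

# Synthetic example: a 500 Hz tone reaching the right ear 8 samples late.
fs = 16000
t = np.arange(0, 0.05, 1.0 / fs)
sig = np.sin(2 * np.pi * 500 * t)
delay = 8  # samples, i.e. 0.5 ms
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])

print(estimate_itd(left, right, fs))  # 0.0005 s (= delay / fs)
```

With narrowband signals the cross-correlation has periodic secondary peaks, which is why band-by-band models combine evidence across frequency before committing to a lag.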
“…Most acoustic cue-based approaches are limited to specific computational strategies that use exclusively the head-related transfer functions (HRTFs), i.e. the direction-specific acoustic filtering by the pinnae and the head (Colburn & Kulkarni, 2005). Standard modeling approaches rely on acoustic information alone, and so cannot explain the effects of motor actions and other sensory modalities on the computation.…”
Section: Introduction (mentioning, confidence: 99%)
“…[2]). Processing motivated by human binaural perception, which utilizes spatial cues including interaural time difference (ITD) and interaural intensity difference (IID) [3,4,5], has long been thought to be useful for separating sound sources from different directions and for coping with the effects of reverberation, and this approach is now being extended to speech recognition (e.g. [6]).…”
Section: Introduction (mentioning, confidence: 99%)
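The second binaural cue named in the excerpt, the interaural intensity difference (IID, also called ILD), is simply a level ratio between the two ear signals. A hedged sketch with a synthetic head-shadow attenuation (the signals and the factor 0.5 are made up for illustration):

```python
import numpy as np

def interaural_intensity_difference(left, right):
    """IID in decibels: the ratio of left- to right-ear signal energy.
    Positive values mean the source is more intense at the left ear."""
    return 10.0 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))

# Synthetic example: the right ear receives a head-shadowed (halved) copy,
# so the energy ratio is 4, i.e. about 6 dB.
rng = np.random.default_rng(1)
sig = rng.standard_normal(1024)
left, right = sig, 0.5 * sig

print(round(interaural_intensity_difference(left, right), 2))  # 6.02 dB
```

In practice IID is frequency dependent (head shadow is strongest at high frequencies), so systems of the kind cited compute it per frequency band rather than broadband as here.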