Parametric Time-Frequency Domain Spatial Audio (2017)
DOI: 10.1002/9781119252634.ch6
Higher‐Order Directional Audio Coding

Cited by 3 publications (3 citation statements). References 5 publications.
“…The proposed solution is to use (23), (24) for n = N to extend the set of recurrences in (22) to the highest order. With normalization re-inserted using (21) and complex conjugation that affects Ŷ_n^m, θ_xy, θ*_xy, the extending relations become…”
Section: Proposed Methods (mentioning, confidence: 99%)
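As background only, the sketch below shows the textbook associated Legendre recurrences that such order-recursive spherical-harmonic schemes build on, including the sectoral recurrence needed to reach the highest order n = N. It is a generic illustration, not the citing paper's relations (21)-(24), which operate directly on the normalized complex harmonics Ŷ_n^m and the azimuth terms θ_xy, θ*_xy.

```python
import numpy as np

def assoc_legendre_table(N, x):
    """Unnormalized associated Legendre values P_n^m(x) for 0 <= m <= n <= N,
    built with the standard recurrences (generic textbook relations; not the
    normalized complex-SH recurrences of the citing paper)."""
    P = np.zeros((N + 1, N + 1))
    P[0, 0] = 1.0
    s = np.sqrt(1.0 - x * x)
    # Sectoral (diagonal) recurrence reaches the highest order directly:
    # P_m^m = -(2m - 1) * sqrt(1 - x^2) * P_{m-1}^{m-1}
    for m in range(1, N + 1):
        P[m, m] = -(2 * m - 1) * s * P[m - 1, m - 1]
    # First off-diagonal term: P_{m+1}^m = (2m + 1) * x * P_m^m
    for m in range(0, N):
        P[m + 1, m] = (2 * m + 1) * x * P[m, m]
    # Three-term recurrence in n for fixed m:
    # (n - m) P_n^m = (2n - 1) x P_{n-1}^m - (n + m - 1) P_{n-2}^m
    for m in range(0, N + 1):
        for n in range(m + 2, N + 1):
            P[n, m] = ((2 * n - 1) * x * P[n - 1, m]
                       - (n + m - 1) * P[n - 2, m]) / (n - m)
    return P
```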
“…to analyze reverberation [20], [21], can benefit from the largest possible number of detectable sources, in particular in time segments where the echo density causes multiple interfering image sources. With only slightly increased effort, the parametric higher-order directional audio coding and room impulse response rendering techniques (HO-DirAC [22], HO-SIRR [23]) efficiently extract multiple pseudo-intensity vectors. However, to cope with multiple sources, the intensity vectors are extracted in directional sectors; sound-field parameters are therefore only obtained within those sectors.…”
Section: Introduction (mentioning, confidence: 99%)
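To illustrate the sector idea, here is a minimal NumPy sketch that computes time-averaged pseudo-intensity vectors per directional sector from a block of spherical-harmonic signals. The interface is hypothetical (sh_sigs, w_press, w_vel are illustrative names), and it mirrors the concept described above rather than the exact HO-DirAC/HO-SIRR processing.

```python
import numpy as np

def sector_pseudo_intensity(sh_sigs, w_press, w_vel):
    """Pseudo-intensity vectors per directional sector (conceptual sketch).

    sh_sigs : (num_sh, num_samples) spherical-harmonic signals for one frame
    w_press : (num_sectors, num_sh) sector pressure beamforming weights
    w_vel   : (num_sectors, 3, num_sh) x/y/z velocity (dipole-weighted) beamformers
    Returns : (num_sectors, 3) time-averaged pseudo-intensity vectors
    """
    p = w_press @ sh_sigs                          # sector pressure, (S, T)
    v = np.einsum('sdn,nt->sdt', w_vel, sh_sigs)   # sector velocity, (S, 3, T)
    # Active intensity ~ Re{ conj(p) * v }, averaged over the time frame
    return np.real(np.conj(p)[:, None, :] * v).mean(axis=-1)
```

The per-sector intensity vectors can then be used to estimate one direction-of-arrival and diffuseness value per sector, which is what allows multiple simultaneous sources to be parameterized.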
“…This may involve a reduction in background noise and improved speech intelligibility [1,2], the preservation or modification of the perceived spatial properties of the scene [3,4], or an extension of the listeners' hearing abilities beyond the audible range [5]. For the other use case, these enhancements may also find application, but with less stringent latency constraints, while also permitting additional spatial modifications [6,7] prior to reproducing the captured scene over the target playback setup. The topic of this article falls within this latter reproduction task for headphones, with the added goal of accounting for both the listener's head orientation and their position relative to the recording point, which is often collectively referred to as six-degrees-of-freedom (6DoF) rendering based on sound-field extrapolation [8].…”
Section: Introduction (mentioning, confidence: 99%)