DAFX: Digital Audio Effects 2011
DOI: 10.1002/9781119991298.ch9
Adaptive Digital Audio Effects

Cited by 9 publications (7 citation statements)
References 25 publications
“…The high-level signal flow of an adaptive effect is depicted in Figure 8. The construction of an adaptive digital audio effect includes three steps [137]:…”
Section: Adaptive Digital Audio Effects
confidence: 99%
“…1. the analysis/feature-extraction aspect; 2. the mapping between features and effect parameters; 3. the transformation and resynthesis aspect of the digital audio effect. Adaptive digital audio effects may be classified into the following categories [137]:…”
Section: Adaptive Digital Audio Effects
confidence: 99%
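The three construction steps quoted above (feature extraction, feature-to-parameter mapping, transformation/resynthesis) can be sketched as a minimal adaptive gain effect. This is an illustrative sketch, not code from the chapter: the frame size, RMS normalisation constant, and gain range below are assumptions chosen for demonstration.

```python
import numpy as np

def extract_feature(frame):
    """Step 1: analysis/feature extraction -- RMS amplitude of the frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def map_feature(rms, lo=0.2, hi=1.0):
    """Step 2: map the feature onto an effect parameter (here, a gain).

    The 0.5 normalisation and the [lo, hi] range are arbitrary choices.
    """
    return lo + (hi - lo) * min(rms / 0.5, 1.0)

def adaptive_gain(signal, frame_len=256):
    """Step 3: transformation -- apply the feature-driven gain frame by frame."""
    out = np.array(signal, dtype=float, copy=True)
    for start in range(0, len(out), frame_len):
        frame = out[start:start + frame_len]
        out[start:start + frame_len] = frame * map_feature(extract_feature(frame))
    return out
```

Quiet frames are attenuated more strongly than loud ones, i.e. the effect's control parameter is derived from the input signal itself rather than set by the user.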
“…Things like the amplitude-controlled spatial granulation described above, and in general new spatialisation approaches along the lines of 'Adaptive Digital Audio Effects' (Verfaille and Arfib 2001) are just one example of such an area. One form this has taken in the BEASTmulch project has been the question of where to draw the line between using the library and using the system application.…”
Section: Conclusion and Caveats
confidence: 99%
“…Whilst the axes of the two-dimensional space are somewhat arbitrary, underlying timbral characteristics are projected onto the space via a training stage using two-term musical semantics data. In addition to this, we propose a signal processing method of adapting the parameter modulation process to the incoming audio data based on feature extraction applied to the long-term average spectrum (LTAS), as detailed in [17][18][19], capable of running in near-real-time. The model is implemented using the SAFE architecture (detailed in [20]), and is provided as an extension of the current Semantic Audio Parametric Equaliser (available for download at [21]), as shown in Figure 1a.…”
Section: Aims
confidence: 99%
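The long-term average spectrum (LTAS) feature extraction described in the last excerpt can be illustrated with a short sketch: average the magnitude spectra of windowed frames, then summarise the result as a single feature that could drive a parameter such as an EQ band gain. The frame length, hop size, and the choice of spectral centroid as the summary feature are assumptions for illustration, not details of the cited implementation.

```python
import numpy as np

def ltas(signal, frame_len=512, hop=256):
    """Long-term average spectrum: Hann-windowed magnitude spectra, averaged."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    window = np.hanning(frame_len)
    spectra = [np.abs(np.fft.rfft(f * window)) for f in frames]
    return np.mean(spectra, axis=0)

def spectral_centroid(mag, sr=44100):
    """Summarise an LTAS as its centroid in Hz -- one plausible feature to
    map onto an effect parameter."""
    freqs = np.linspace(0, sr / 2, len(mag))
    return float(np.sum(freqs * mag) / np.sum(mag))
```

Because the LTAS averages over many frames, the derived parameter changes slowly, which is why such feature extraction can run in near-real-time alongside the effect.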