ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9053474

Playing Technique Recognition by Joint Time–Frequency Scattering

Abstract: Playing techniques are important expressive elements in music signals. In this paper, we propose a recognition system based on the joint time-frequency scattering transform (jTFST) for pitch evolution-based playing techniques (PETs), a group of playing techniques with monotonic pitch changes over time. The jTFST represents spectro-temporal patterns in the time-frequency domain, capturing discriminative information of PETs. As a case study, we analyse three commonly used PETs of the Chinese bamboo flute: acciac…
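To make the transform concrete, below is a minimal, simplified sketch of jTFS-style feature extraction: a constant-Q scalogram is convolved with oriented 2-D Gabor wavelets along time and log-frequency, followed by a modulus and time-averaging. The function names, filter parameters, and the use of librosa/scipy are illustrative assumptions, not the authors' implementation, which uses a proper scattering filterbank.

import numpy as np
import librosa
from scipy.signal import fftconvolve

def gabor_2d(n_freq, n_time, xi_freq, xi_time, sigma_freq, sigma_time):
    # 2-D Gabor wavelet oriented in the (log-frequency, time) plane.
    f = np.arange(n_freq) - n_freq // 2
    t = np.arange(n_time) - n_time // 2
    F, T = np.meshgrid(f, t, indexing="ij")
    envelope = np.exp(-0.5 * ((F / sigma_freq) ** 2 + (T / sigma_time) ** 2))
    carrier = np.exp(1j * 2 * np.pi * (xi_freq * F + xi_time * T))
    return envelope * carrier

def jtfs_like_features(y, sr, bins_per_octave=24, n_bins=96, hop_length=64):
    # First order: modulus of a constant-Q wavelet transform (scalogram).
    U1 = np.abs(librosa.cqt(y, sr=sr, hop_length=hop_length,
                            n_bins=n_bins, bins_per_octave=bins_per_octave))
    U1 = np.log1p(U1)
    # Second order: 2-D Gabor wavelets over the scalogram capture joint
    # spectro-temporal patterns; opposite signs of xi_time separate rising
    # from falling pitch trajectories.
    features = []
    for xi_time in (0.05, 0.1, -0.05, -0.1):   # temporal modulation rates
        for xi_freq in (0.05, 0.1):            # frequential modulation scales
            psi = gabor_2d(17, 33, xi_freq, xi_time, sigma_freq=4, sigma_time=8)
            U2 = np.abs(fftconvolve(U1, psi, mode="same"))
            features.append(U2.mean(axis=1))   # low-pass: average over time
    return np.concatenate(features)            # one feature vector per clip

The oppositely signed temporal modulation rates are what allow the joint transform to separate upward from downward pitch glides, which is the discriminative property the abstract refers to for techniques with monotonic pitch evolution.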

Cited by 11 publications (18 citation statements); references 10 publications (12 reference statements).
Citation types: 2 supporting, 16 mentioning, 0 contrasting. Citing publications date from 2020 to 2023.

Citation statements:
“…This favorable result suggests that joint time-frequency scattering provides a useful feature map for learning similarities between instrumental playing techniques. In doing so, it is in line with a recent publication [74], in which the authors successfully trained a supervised classifier on joint time-frequency scattering features in order to detect and classify playing techniques from the Chinese bamboo flute (dizi). However, the originality of our work is that it relies purely on auditory information (i.e., timbre similarity judgments), and does not require any supervision from the symbolic domain.…”
Section: Best Performing System (supporting)
confidence: 53%
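The statement above describes training a supervised classifier on joint time-frequency scattering features. A minimal sketch of that kind of pipeline, assuming scikit-learn and precomputed feature vectors X (for example from a function like jtfs_like_features above) with playing-technique labels y, is given below; it is hypothetical and not the cited paper's exact setup.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_pet_classifier(X, y, n_folds=5):
    # X: (n_clips, n_features) jTFS-style vectors; y: playing-technique labels.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    scores = cross_val_score(clf, X, y, cv=n_folds)
    return scores.mean(), scores.std()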
“…This favorable result suggests that joint time-frequency scattering provides a useful feature map for learning similarities between instrumental playing techniques. In doing so, it is in line with a recent publication [71], in which the authors successfully trained a supervised classifier on joint time-frequency scattering features in order to detect and classify playing techniques from the Chinese bamboo flute (dizi). However, the originality of our work is that Φ relies purely on auditory information (i.e., timbre similarity judgments), and does not require any supervision from the symbolic domain.…”
Section: Evaluation Metric (supporting)
confidence: 53%
“…Music deconstruction includes research about melody, music spectral characteristics, the correlation of different types of music genres, and so on. Wang et al [17] did research on playing technique recognition from the dizi music spectrum. Yang et al [21] did a quantitative study of vibrato to compare erhu music and violin music.…”
Section: Related Work (mentioning)
confidence: 99%