2021
DOI: 10.5334/tismir.85
On End-to-End White-Box Adversarial Attacks in Music Information Retrieval

Abstract: Small adversarial perturbations of input data can drastically change the performance of machine learning systems, thereby challenging their validity. We compare several adversarial attacks targeting an instrument classifier, where for the first time in Music Information Retrieval (MIR) the perturbations are computed directly on the waveform. The attacks can reduce the accuracy of the classifier significantly, while at the same time keeping perturbations almost imperceptible. Furthermore, we show the potential …
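The white-box setting in the abstract means the attacker can read the model's gradients and perturb the raw waveform directly. A canonical illustration of this idea (not the specific attacks compared in the paper) is the fast gradient sign method, sketched here on a hypothetical linear score over a waveform array:

```python
import numpy as np

# Toy white-box setup: a linear "classifier" score = w . x, where x
# stands in for a raw audio waveform. Both w and x are hypothetical.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000)          # model weights, known to the attacker
x = rng.standard_normal(1000) * 0.1    # stand-in for a waveform

def score(signal):
    return float(w @ signal)

# FGSM-style step: to lower the correct-class score, move each sample
# against the gradient of the score w.r.t. the input (here simply w).
eps = 0.01                              # small L_inf budget -> near-imperceptible
x_adv = x - eps * np.sign(w)

assert score(x_adv) < score(x)                      # score is reduced
assert np.max(np.abs(x_adv - x)) <= eps + 1e-12     # perturbation stays in budget
```

The perturbation is bounded per sample by `eps`, which is the mechanism behind "significant accuracy drop, almost imperceptible change" described in the abstract.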

Cited by 6 publications (16 citation statements)
References 22 publications
“…In this work, we therefore try to apply a method for defending against adversaries that does not require full knowledge of the specific attack; the one limitation we work with, however, is that the attack needs to exploit the hubness phenomenon. Note also that the real-world recommender that was attacked in previous work [7] would not permit adversarial training as a defence method due to the nature of the system (cf. section 4.1), as no training in the sense of learning model-parameters is performed.…”
Section: Related Work
confidence: 99%
“…In this work, we build upon two approaches that were previously published; first, we use the attack scenario proposed in [7] in order to provide the setting for our defence. Secondly, we apply the hubness-reduction method introduced in [9] and investigate its suitability as a defence method against adversarial examples.…”
Section: Related Work
confidence: 99%
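The second citation statement describes defending against these attacks by reducing hubness in the distance space. One widely used hubness-reduction technique is Mutual Proximity, which rescales each pairwise distance by how mutually close the two points are; whether this is the exact method of [9] is not stated here, so the following numpy sketch is illustrative only:

```python
import numpy as np

def mutual_proximity(D):
    """Empirical Mutual Proximity rescaling of a distance matrix D.

    For each pair (i, j), estimate the fraction of points farther from i
    than j, times the fraction farther from j than i; points that are
    mutually close get high similarity, which dampens hub points that are
    close to everything asymmetrically.
    """
    n = D.shape[0]
    S = np.zeros_like(D, dtype=float)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            p_i = np.mean(D[i] > D[i, j])   # P(d(i, k) > d(i, j)) over points k
            p_j = np.mean(D[j] > D[j, i])
            S[i, j] = p_i * p_j             # high = mutually close
    out = 1.0 - S                           # back to a distance-like scale in [0, 1]
    np.fill_diagonal(out, 0.0)
    return out

# Example on pairwise Euclidean distances of random points (hypothetical data):
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
D_mp = mutual_proximity(D)
```

The rescaled matrix is bounded in [0, 1] and penalizes one-sided closeness, which is the property a hubness-aware defence exploits.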