ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9414844

Adversarial Attacks on Audio Source Separation

Abstract: Despite the excellent performance of neural-network-based audio source separation methods and their wide range of applications, their robustness against intentional attacks has been largely neglected. In this work, we reformulate various adversarial attack methods for the audio source separation problem and intensively investigate them under different attack conditions and target models. We further propose a simple yet effective regularization method to obtain imperceptible adversarial noise while maximizing t…
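The abstract describes gradient-based adversarial noise combined with a regularizer that keeps the noise imperceptible. Below is a minimal, hedged sketch of that general idea in Python/PyTorch, not the authors' exact method: a toy separator, an Adam-driven search for additive noise that pushes the separator's output away from its clean estimate, and a simple L2 energy penalty standing in for the paper's imperceptibility regularization. The model, function names, and hyperparameters (ToySeparator, adversarial_noise, steps, lr, reg_weight) are illustrative assumptions.

# Hedged sketch of an adversarial attack on an audio source separation model.
# The separator, loss weights, and step sizes are illustrative assumptions,
# not the setup used in the paper.

import torch
import torch.nn as nn


class ToySeparator(nn.Module):
    """Placeholder separator: maps a mixture waveform to one estimated source."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )

    def forward(self, mixture):
        return self.net(mixture)


def adversarial_noise(model, mixture, steps=100, lr=1e-3, reg_weight=10.0):
    """Search for a small additive noise that degrades the separation output.

    Maximizes the distance between the separation of the clean and the
    perturbed mixture, while an L2 penalty keeps the noise energy low
    (a crude surrogate for imperceptibility).
    """
    clean_out = model(mixture).detach()
    # Small random init so the attack gradient is nonzero at the first step.
    noise = (1e-3 * torch.randn_like(mixture)).requires_grad_(True)
    opt = torch.optim.Adam([noise], lr=lr)

    for _ in range(steps):
        adv_out = model(mixture + noise)
        # Negative separation discrepancy: minimizing it degrades the output.
        attack_term = -torch.mean((adv_out - clean_out) ** 2)
        # Imperceptibility surrogate: penalize noise energy.
        reg_term = reg_weight * torch.mean(noise ** 2)
        loss = attack_term + reg_term
        opt.zero_grad()
        loss.backward()
        opt.step()

    return noise.detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToySeparator().eval()
    mixture = torch.randn(1, 1, 16000)  # 1 s of dummy audio at 16 kHz
    noise = adversarial_noise(model, mixture)
    print("noise RMS:", noise.pow(2).mean().sqrt().item())

In this sketch, reg_weight controls the trade-off between how strongly the separator's output is disturbed and how quiet the added noise stays; the paper's actual regularizer may differ in form.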

Cited by 6 publications (2 citation statements)
References 21 publications
“…The aim is to develop a method N that creates a noise n to be applied to the input x, transforming x into x′ such that performance is preserved in system S and degraded in system A [22]. Expanding on previous work in the area of audio scrambling [13], we suggest increased research to explore adversarial AI components that check whether speech content and individual identity could be reconstructed from degraded or scrambled audio.…”
Section: Research Directions and Challenges 3.1 Reliability: Data Degr…
Citation type: mentioning
Confidence: 99%
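One way to formalize the objective quoted above, purely as an interpretation rather than the cited authors' formulation (the losses L_S and L_A, the trade-off weight λ, and the noise budget ε are assumed symbols):

\[
x' = x + n, \qquad
\min_{n}\; \mathcal{L}_{S}(x+n) \;-\; \lambda\,\mathcal{L}_{A}(x+n)
\quad \text{subject to} \quad \lVert n \rVert_{2} \le \epsilon
\]

Minimizing the first term keeps performance on system S close to optimal, while the second term (weighted by λ) raises the loss of system A; the norm constraint bounds how much noise is added to x.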
“…Kos et al. [7] proposed adversarial examples to attack an image reconstruction model, the first such work targeting deep generative models. In the audio domain, Takahashi et al. proposed adversarial examples to attack an audio source separation model, which can help protect songs' copyrights against abuse of the separated signals [8]. Huang et al. proposed attacks on voice conversion systems [9], which can protect a speaker's private information.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%