2024
DOI: 10.1007/s43681-023-00412-3

Protecting ownership rights of ML models using watermarking in the light of adversarial attacks

Katarzyna Kapusta, Lucas Mattioli, Boussad Addad, et al.

Abstract: In this paper, we present and analyze two novel, and seemingly distant, research trends in Machine Learning: ML watermarking and adversarial patches. First, we show how ML watermarking uses specially crafted inputs to provide a proof of model ownership. Second, we demonstrate how an attacker can craft adversarial samples in order to trigger abnormal behavior in a model and thus perform an ambiguity attack on ML watermarking. Finally, we describe three countermeasures that could be applied in order to prevent…
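The abstract outlines two opposing mechanisms: verifying ownership through specially crafted inputs, and forging such inputs with adversarial perturbations to mount an ambiguity attack. The sketch below illustrates both ideas in a generic trigger-set setting; every name here (verify_ownership, forge_trigger_set, grad_fn, the 0.9 agreement threshold, the FGSM-style update) is an illustrative assumption, not the concrete scheme or attack studied in the paper.

```python
# Minimal sketch of trigger-set ("backdoor") watermark verification and of an
# adversarial-sample ambiguity attack against it. All names, thresholds, and
# the perturbation method are illustrative assumptions, not the paper's scheme.

import numpy as np


def verify_ownership(model_predict, trigger_inputs, trigger_labels, threshold=0.9):
    """Ownership check: the model must reproduce the secret trigger labels.

    model_predict: callable mapping a batch of inputs to predicted labels.
    trigger_inputs: secret, specially crafted inputs embedded during training.
    trigger_labels: the labels the owner assigned to those inputs.
    threshold: fraction of triggers that must match for the claim to hold.
    """
    predictions = model_predict(trigger_inputs)
    agreement = np.mean(np.asarray(predictions) == np.asarray(trigger_labels))
    return agreement >= threshold


def forge_trigger_set(grad_fn, inputs, target_labels, eps=0.05, steps=20):
    """Ambiguity attack sketch: craft adversarial samples that the victim
    model classifies as `target_labels`, then present them as a "trigger set"
    to cast doubt on the true owner's claim.

    grad_fn(x, y) is assumed to return the gradient of the model's loss on
    target labels y with respect to inputs x (e.g. obtained via autodiff).
    """
    x = np.array(inputs, dtype=float)
    for _ in range(steps):
        # Targeted FGSM-style step: move inputs toward the attacker's labels.
        x = x - eps * np.sign(grad_fn(x, target_labels))
        x = np.clip(x, 0.0, 1.0)  # keep inputs in a valid pixel range
    # If successful, verify_ownership(model_predict, x, target_labels) passes
    # just as it would for the legitimate owner's trigger set.
    return x
```

Because a forged trigger set passes the same statistical check as the genuine one, verification alone cannot settle ownership; this ambiguity is what the countermeasures described in the paper are meant to address.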

Cited by 1 publication
References 13 publications