2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)
DOI: 10.1109/mipr.2019.00073

Visual Decoding of Hidden Watermark in Trained Deep Neural Network

Cited by 24 publications (15 citation statements: 0 supporting, 15 mentioning, 0 contrasting)
References 4 publications
“…To keep these trigger samples as close as possible to the original samples, an autoencoder is used whose discriminator is trained to distinguish between training and trigger samples with the watermarks. Sakazawa et al. (2019) proposed a cumulative and visual decoding of watermarks in NNs, such that patterns embedded into the training data become visible for authentication by a third party.…”
Section: Categorizing Watermarking Methods (mentioning)
confidence: 99%
“…However, neither of them was tested on ImageNet, and it is not feasible to train an ImageNet model from scratch. The other model watermarking methods, such as [2][3][4][5][6][7][8][9], focus on ownership verification only when a stolen model is in question. Therefore, the embedded watermark is independent of model accuracy.…”
Section: Functional Comparison With State-of-the-art Methods (mentioning)
confidence: 99%
“…Another study in [3] implanted a backdoor in a model so that a watermark can be triggered through the backdoor. Generally, in black-box approaches, a special set of training examples is used so that watermarks are extracted from the inference of a model [12,27,33,44]. Li et al. pointed out that backdoor attack-based methods can be defeated by existing backdoor defenses (e.g.…”
Section: Related Work 2.1 DNN Model Watermarking (mentioning)
confidence: 99%
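In black-box schemes of the kind this statement describes, verification typically amounts to querying the suspect model on the secret trigger set and testing whether the pre-assigned watermark labels are reproduced well above chance. Below is a minimal sketch under that assumption; the function name, the agreement threshold of 0.9, and the toy suspect model are all illustrative, not taken from any cited paper.

```python
import numpy as np

def verify_ownership(model_predict, trigger_inputs, wm_labels, threshold=0.9):
    """Black-box ownership check: a model carrying the backdoor watermark
    should reproduce the pre-assigned labels on the secret trigger set
    far above chance. All names and the threshold are assumptions."""
    preds = np.array([model_predict(x) for x in trigger_inputs])
    agreement = float(np.mean(preds == np.array(wm_labels)))
    return agreement, agreement >= threshold

# Toy stand-in for a stolen model that retained the backdoor:
# it maps every trigger image to the pre-assigned class 7.
suspect = lambda x: 7
triggers = [np.zeros((8, 8)) for _ in range(20)]   # secret key images
labels = [7] * 20                                  # pre-assigned labels
print(verify_ownership(suspect, triggers, labels)) # (1.0, True)
```

Because only predictions are needed, this check works through a query API without access to the model's weights, which is exactly why the trigger set must stay secret: an adversary who learns it can retrain the backdoor away or dispute ownership.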
“…In this paper, we focus on ownership verification of DNN models. Researchers have proposed various model watermarking methods [3,12,27,31,33,41,44]. However, most of the existing DNN watermarking methods are not robust against piracy attacks, as described in [25,43].…”
Section: Introduction (mentioning)
confidence: 99%