2021
DOI: 10.1016/j.jisa.2021.103004
Spread-Transform Dither Modulation Watermarking of Deep Neural Network

Cited by 23 publications (15 citation statements)
References 14 publications
“…Table 5 shows the effect on performance when ascending pruning attacks and random pruning attacks are performed with increasing pruning rates, following previous studies [6, 13, 15]. In the ascending pruning attack, the top R % of parameters are cut off in ascending order of their absolute values, while in the random pruning attack, R % of parameters are cut off at random. In the evaluation of the multilayer perceptron (MLP) and VGG, the bit error rate (BER) is zero up to a pruning rate of 0.9; in the evaluation of Wide ResNet (WRN), the BER is zero up to a pruning rate of 0.6 or 0.65.…”
Section: Results
confidence: 99%
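The two attack variants described in the excerpt can be sketched in a few lines. This is a minimal NumPy illustration; the function names and the BER helper are illustrative, not taken from the cited papers:

```python
import numpy as np

def ascending_pruning(weights, rate):
    """Ascending pruning attack: zero out the fraction `rate` of
    parameters with the smallest absolute values."""
    flat = weights.flatten()                    # copy; original untouched
    k = int(rate * flat.size)
    smallest = np.argsort(np.abs(flat))[:k]     # indices of smallest |w|
    flat[smallest] = 0.0
    return flat.reshape(weights.shape)

def random_pruning(weights, rate, seed=0):
    """Random pruning attack: zero out a random fraction `rate` of parameters."""
    rng = np.random.default_rng(seed)
    flat = weights.flatten()
    k = int(rate * flat.size)
    flat[rng.choice(flat.size, size=k, replace=False)] = 0.0
    return flat.reshape(weights.shape)

def bit_error_rate(extracted, embedded):
    """Fraction of watermark bits that flipped after the attack."""
    return float(np.mean(np.asarray(extracted) != np.asarray(embedded)))
```

A BER of zero after pruning, as reported in the excerpt, means every watermark bit extracted from the pruned model still matches the embedded bit string.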
“…Uchida et al showed experimentally that the watermark does not disappear even after a pruning attack that prunes 65% of the parameters [ 6 ]. Another study achieved robustness against 60% pruning [ 15 ]. This study adopted the idea of spread transform dither modulation (ST-DM) watermarking by extending the conventional spread spectrum (SS)-based DNN watermarking.…”
Section: Introduction
confidence: 99%
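The ST-DM idea mentioned in the excerpt can be illustrated with a toy one-bit decoder: project the weights onto a secret spreading vector (the spread transform), then decide the bit by asking which dithered uniform quantizer lattice the projection falls closest to. The spreading vector `s` and quantization step `delta` below are assumptions for illustration, not parameters from the paper:

```python
import numpy as np

def stdm_decode_bit(weights, s, delta):
    """Minimal ST-DM decoding sketch: spread-transform projection
    followed by minimum-distance decoding over two dithered lattices."""
    p = float(np.dot(weights, s))                          # spread-transform projection
    d0 = abs(p - delta * round(p / delta))                 # lattice for bit 0: k*delta
    c1 = delta * round((p - delta / 2) / delta) + delta / 2
    d1 = abs(p - c1)                                       # lattice for bit 1: k*delta + delta/2
    return 0 if d0 <= d1 else 1
```

In contrast, a plain SS decoder would threshold the sign (or magnitude) of the same projection; the quantization step is what distinguishes the dither-modulation variant.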
“…They regularize the selected weights to some secret values by adding a regularization loss during training. Li et al [24] improve the method in [35] by adding a Spread-Transform Dither Modulation (ST-DM)-like regularization term, which can reduce the impact of watermarking on the accuracy of the DNN model on normal inputs. Chen et al [3] improve [35] by implementing a watermarking system with anti-collision capabilities.…”
Section: Related Work
confidence: 99%
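The regularization-based embedding the excerpt describes (regularizing selected weights toward secret values during training) can be sketched as a binary cross-entropy penalty on secret projections of the weights. This is a sketch of the general idea only: `X` is a hypothetical secret projection matrix, and the exact regularizers of [35] and of the ST-DM-like term in [24] differ from this toy version:

```python
import numpy as np

def watermark_regularizer(w, X, b):
    """Binary cross-entropy term pushing sigmoid(X @ w) toward the
    owner's bit string b; added to the task loss during training so
    the watermark is embedded as a side effect of optimization."""
    y = 1.0 / (1.0 + np.exp(-(X @ w)))   # soft extracted bits
    eps = 1e-12                          # numerical safety for log
    return float(-np.mean(b * np.log(y + eps) + (1 - b) * np.log(1 - y + eps)))

def extract_watermark(w, X):
    """Extraction: threshold the secret projections at zero."""
    return (X @ w >= 0).astype(int)
```

During training, the total objective is the task loss plus a small multiple of this penalty; extraction afterwards needs only the trained weights and the secret matrix `X`.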
“…Because the marked signal may be intentionally attacked prior to watermark extraction, the watermark embedding procedure is required to be robust against attacks for reliable ownership identification. A straightforward idea for protecting the IP of DNN models is to directly extend advanced watermarking strategies suited to media signals to DNN models, since media watermarking has been widely studied over the past two decades [18]- [20]. However, unlike media signals, which are static data, DNN models are functional.…”
Section: Introduction
confidence: 99%