2013 IEEE International Conference on Image Processing 2013
DOI: 10.1109/icip.2013.6738931
Logarithmic Spread-Transform Dither Modulation Watermarking Based on Perceptual Model

Abstract: Logarithmic Quantization Index Modulation (LQIM) is an important extension of the original quantization-based watermarking method. However, it is known to be sensitive to the valumetric scaling attack and prone to sign errors after quantization and attacks. To address this, we propose a new method, Logarithmic Spread-Transform Dither Modulation Based on a Perceptual Model (LSTDM-WM), in which the host signal is first projected onto a random vector and transformed using a novel …
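The pipeline the abstract describes (project the host onto a random vector, apply a logarithmic transform, then dither-modulate the result) can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the mu-law companding function, its parameters, and the quantization step `delta` are all assumptions.

```python
import numpy as np

def mu_law(x, mu=255.0, xs=1.0):
    # Logarithmic (mu-law) companding: compresses the host amplitude
    # before quantization. 'mu' and the scale 'xs' are illustrative choices.
    return xs * np.log1p(mu * np.abs(x) / xs) / np.log1p(mu) * np.sign(x)

def mu_law_inv(y, mu=255.0, xs=1.0):
    # Inverse companding, expanding back to the original domain.
    return xs / mu * np.expm1(np.abs(y) * np.log1p(mu) / xs) * np.sign(y)

def stdm_embed(host, bit, u, delta):
    # Spread transform: project the host block onto the unit vector u.
    x = host @ u
    y = mu_law(x)                        # logarithmic transform
    d = 0.0 if bit == 0 else delta / 2   # per-bit dither
    yq = np.round((y - d) / delta) * delta + d   # dithered quantizer
    # Put the projection-domain change back along u.
    return host + (mu_law_inv(yq) - x) * u

def stdm_decode(recv, u, delta):
    y = mu_law(recv @ u)
    # Choose the bit whose dithered lattice is closest to y.
    errs = [abs(y - (np.round((y - d) / delta) * delta + d))
            for d in (0.0, delta / 2)]
    return int(np.argmin(errs))
```

Because the quantization happens in the log domain, a valumetric scaling of the host moves the projection multiplicatively, which the logarithm turns into an approximately additive shift; that is the intuition behind LQIM-style robustness to scaling.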

Cited by 12 publications (18 citation statements)
References 13 publications
“…The experiments were conducted to compare the performance of the proposed scheme against other STDM improvements, termed STDM-RW [7], STDM-AdpWM [12], STDM-RDMWm [13], and LSTDM-WM [34]. Three kinds of attacks (Gaussian noise with zero mean and variance ranging from 0 to 15; JPEG compression, with the quality factor varying from 20 to 100; and volumetric scaling attacks that scale the image intensities by a factor varying from 0.1 to 1.5) were used to verify the performance of the proposed models.…”
Section: Experimental Results and Analysis
confidence: 99%
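Two of the three attacks quoted above are straightforward to reproduce (JPEG compression additionally needs an image codec); a minimal sketch, assuming 8-bit images and clipping to the valid range:

```python
import numpy as np

def gaussian_noise_attack(img, var, rng=None):
    # Additive Gaussian noise with zero mean and the given variance
    # (variances 0-15 in the experiments quoted above).
    rng = np.random.default_rng() if rng is None else rng
    out = img + rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(out, 0, 255)

def valumetric_scaling_attack(img, rho):
    # Multiply every pixel by a scaling factor rho (0.1-1.5 in the
    # quoted experiments); this shifts QIM lattices and causes bit
    # errors for fixed-step quantizers.
    return np.clip(img * rho, 0, 255)
```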
“…The experimental results are demonstrated in Figure 7. From the robustness results, the proposed scheme outperforms STDM-RW [7], STDM-AdpWM [12], STDM-RDMWm [13], and LSTDM-WM [34] schemes. Our proposed scheme has average BER values 3% lower than the STDM-RW and STDM-RDMWm schemes.…”
Section: Experiments of Robustness with VSI = 0.982
confidence: 94%
“…RDM achieves better performance against the FGA attack, but with limited robustness against additive noise. A number of solutions using perceptual models [13]–[18] based on Watson's model [19] have been proposed to improve fidelity and provide robustness to the FGA attack. Watson's model computes the slack associated with each DCT coefficient within an 8×8 block, and those slacks are used to select the projection vector and/or to determine the quantization step size during the embedding and decoding process.…”
Section: Introduction
confidence: 99%
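A rough sketch of how per-coefficient slacks might feed the quantization step, per the quoted description. The sensitivity table and exponents below are illustrative stand-ins, not Watson's published values, and a positive DC coefficient is assumed:

```python
import numpy as np

# Illustrative 8x8 frequency-sensitivity table (assumed values, not
# Watson's published table); larger entries = less sensitive frequencies.
T = np.full((8, 8), 1.0) + np.add.outer(np.arange(8.0), np.arange(8.0)) * 0.5

def watson_style_slacks(block_dct, mean_dc, a_lum=0.649):
    # Luminance masking: blocks brighter than the image mean tolerate
    # larger changes (exponent in the style of Watson's luminance term).
    t_lum = T * (block_dct[0, 0] / mean_dc) ** a_lum
    # Contrast masking: strong coefficients mask further distortion.
    return np.maximum(t_lum, (np.abs(block_dct) ** 0.7) * (t_lum ** 0.3))

def quantization_step(slacks, u, gain=1.0):
    # Per the quoted text, the slacks steer the projection vector and/or
    # the STDM step; here the step is the slack energy seen along u.
    return gain * float(np.abs(slacks.flatten()) @ np.abs(u))
```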
“…In this Letter, a novel VS-based watermarking method for a monochrome image is proposed, in which the VS model in the discrete cosine transform (DCT) domain is introduced to modulate the perceptual JND model with a new numerical measure. Consequently, the proposed VS-based JND model is used to adjust the quantisation step adaptively for the logarithmic spread transform dither modulation (STDM) watermarking framework [2]. Experiments show the proposed scheme has enhanced robustness against common attacks.…”
confidence: 99%
“…VS-based JND model: In the logarithmic STDM watermarking scheme with the perceptual JND model [2], Watson's perceptual DCT-based JND model for the luminance adaptation effect serves as the foundation for adjusting the quantisation step adaptively. However, the existing JND model measures the visual effect in the image with an equal attention level everywhere, and the actual masking estimation would not be complete without VS.…”
confidence: 99%
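The Letter's numerical measure is not given in the excerpt; a purely hypothetical way to modulate a JND map by a visual-saliency (VS) map, in the spirit the quoted text describes, might look like:

```python
import numpy as np

def vs_modulated_jnd(jnd, saliency, beta=0.5):
    # Hypothetical modulation (not the Letter's actual measure): salient
    # regions, which attract attention, get a smaller JND (less tolerated
    # distortion); non-salient regions get a larger one. 'beta' is an
    # assumed parameter controlling the modulation strength.
    s = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)
    return jnd * (1.0 + beta * (0.5 - s))
```

The modulated map would then replace the plain JND when setting the per-block STDM quantization step.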