TENCON 2005 - 2005 IEEE Region 10 Conference 2005
DOI: 10.1109/tencon.2005.301060

Watermarking of Still Images in Wavelet Domain based on Entropy Masking Model

Cited by 18 publications (8 citation statements); references 7 publications.
“…In 2004, Kim et al. [3] proposed entropy masking in the DCT domain. In 2005, Akhbari et al. [4] offered entropy masking based on the wavelet transform.…”
Section: Literature Survey
confidence: 99%
“…Embedding during compression generally hides information by modifying DCT quantization coefficients [34] or DWT quantization coefficients [35][36][37][38]. However, this strategy suffers from error accumulation and low capacity.…”
Section: ) Between Frames
confidence: 99%
“…This strategy, however, suffers from error accumulation and low capacity. Embedding after compression, by contrast, improves efficiency because no encoding and decoding is needed during embedding and extraction; examples include the MP3 steganography algorithm based on Huffman coding [7] and a digital watermark algorithm for MIDI [36].…”
Section: ) Between Frames
confidence: 99%
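The "modify quantized coefficients" strategy described in the snippets above can be sketched as parity (odd/even) modulation of a quantized mid-frequency DCT coefficient. This is an illustrative toy, not the scheme of any cited paper; the coefficient position, quantization step, and the orthonormal 8-point DCT construction are all assumptions made for the sketch.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)            # scale the DC row for orthonormality
    return m * np.sqrt(2.0 / n)

def embed_bit(block, bit, coef=(4, 3), q=16):
    """Hide one bit in the parity of a quantized DCT coefficient."""
    D = dct_matrix(block.shape[0])
    c = D @ block @ D.T           # forward 2-D DCT
    qc = int(np.round(c[coef] / q))
    if (qc & 1) != bit:           # force quantized value's parity to the bit
        qc += 1
    c[coef] = qc * q              # dequantize the modified coefficient
    return D.T @ c @ D            # inverse 2-D DCT

def extract_bit(block, coef=(4, 3), q=16):
    """Recover the bit from the coefficient's quantized parity."""
    D = dct_matrix(block.shape[0])
    c = D @ block @ D.T
    return int(np.round(c[coef] / q)) & 1
```

Because the embedder snaps the chosen coefficient exactly onto the quantization grid, the extractor recovers the same quantized value (and hence the bit) from the watermarked block alone, which is what makes the scheme blind.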
“…[19] used entropy and HVS characteristics to design a non-blind DWT watermarking scheme for gray images and demonstrated its robustness against attacks. Akhbari, B. and Ghaemmaghami, S. [20] proposed an image-adaptive digital watermarking algorithm in the wavelet transform domain. The method exploits human visual system (HVS) characteristics and entropy masking to determine image-adaptive thresholds for the selection of perceptually significant coefficients.…”
Section: Literature Review
confidence: 99%
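The entropy-masking idea behind the wavelet-domain methods cited above can be sketched as: take a one-level DWT, estimate per-block Shannon entropy of a detail subband, and derive an entropy-adaptive threshold for selecting coefficients. This is a minimal sketch under stated assumptions, not the algorithm of Akhbari and Ghaemmaghami; the Haar transform, 8×8 block size, 16-bin histogram, and the threshold formula `base / (1 + alpha * entropy)` are all illustrative choices.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: approximation + 3 detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def block_entropy(band, block=8, bins=16):
    """Shannon entropy of coefficient magnitudes, per non-overlapping block."""
    h, w = band.shape
    ent = np.zeros((h // block, w // block))
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = np.abs(band[i:i + block, j:j + block]).ravel()
            hist, _ = np.histogram(blk, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            ent[i // block, j // block] = -np.sum(p * np.log2(p))
    return ent

def significant_mask(band, base_thresh=2.0, alpha=0.5, block=8):
    """Entropy-adaptive selection of perceptually significant coefficients.

    Assumed rule: busier (high-entropy) regions get a lower selection
    threshold, so more coefficients there are marked as candidates.
    """
    ent = block_entropy(band, block)
    ent_full = np.kron(ent, np.ones((block, block)))  # back to coef resolution
    thresh = base_thresh / (1.0 + alpha * ent_full)
    return np.abs(band) > thresh
```

The per-block entropy serves as a crude texture-activity measure: textured regions can mask stronger modifications (the HVS tolerates more distortion there), so the threshold for selecting embedding locations is relaxed in proportion to the local entropy.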