2019
DOI: 10.48550/arxiv.1907.06679
Preprint

Towards Near-imperceptible Steganographic Text

Cited by 1 publication (2 citation statements)
References 10 publications

“…Then they use a full binary tree (FLC) and a Huffman tree (VLC) to encode the conditional probability distribution of each word, and output words according to the secret information to be hidden, thereby embedding the hidden information during the sentence generation process. After that, Dai et al. [19] and Ziegler et al. [20] further improved both the statistical language model and the coding of the conditional probability distribution, which further optimizes the conditional probability distribution of each word in the generated steganographic sentences.…”
Section: Related Work
confidence: 99%
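
To make the VLC step concrete, here is a minimal sketch of Huffman-coded word selection, assuming a toy next-word distribution in place of a real language model; the names huffman_codes and embed_step are illustrative, not taken from the cited papers.

import heapq
from itertools import count

def huffman_codes(dist):
    """Build a Huffman (variable-length) code over a next-word
    distribution {word: probability}; likelier words get shorter codes."""
    tie = count()  # tie-breaker so equal probabilities never compare dicts
    heap = [(p, next(tie), {w: ""}) for w, p in dist.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, left = heapq.heappop(heap)
        p1, _, right = heapq.heappop(heap)
        merged = {w: "0" + c for w, c in left.items()}
        merged.update({w: "1" + c for w, c in right.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

def embed_step(dist, bits):
    """Emit the word whose Huffman code is a prefix of the remaining
    secret bitstream and consume those bits (one generation step)."""
    for word, code in huffman_codes(dist).items():
        if bits.startswith(code):
            return word, bits[len(code):]
    raise ValueError("bitstream too short for every candidate code")

# Toy step; a real system would take dist from a language model.
dist = {"the": 0.5, "a": 0.25, "cat": 0.15, "ran": 0.10}
word, remaining = embed_step(dist, "0110")  # -> ("the", "110")

Because the Huffman code is a complete prefix code, a receiver holding the same language model can recover the secret bits by regenerating the distribution at each step and reading off the code of the word it observes.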
“…Previous steganographic text generation models mainly encode the conditional probability distribution of each word to embed the secret information [9], [17], [19], [20]. But as we mentioned before, their common problem is that, as the embedding rate increases, the model increasingly selects words with lower conditional probability, thus reducing the quality of the generated text.…”
Section: Sentence Generation
confidence: 99%
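
The rate–quality trade-off this statement describes is easiest to see in a fixed-length (FLC) embedding step; the sketch below is illustrative, not code from the cited work. With k bits embedded per word, the secret bits can force any of the top 2**k candidates, so raising k necessarily admits lower-probability words.

def flc_embed_step(dist, bits, k):
    """Fixed-length (FLC) step: k secret bits directly index the top
    2**k candidates; assumes len(bits) >= k and at least 2**k words."""
    ranked = sorted(dist, key=dist.get, reverse=True)[: 2 ** k]
    return ranked[int(bits[:k], 2)], bits[k:]

dist = {"the": 0.5, "a": 0.25, "cat": 0.15, "ran": 0.10}
flc_embed_step(dist, "11", 1)  # 1 bit/word: worst case emits "a" (p = 0.25)
flc_embed_step(dist, "11", 2)  # 2 bits/word: bits force "ran" (p = 0.10)

At k = 1 the generated word always comes from the two likeliest candidates, while at k = 2 the bitstream can force the least likely one, which is exactly the quality degradation the citing paper points out.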