2021
DOI: 10.1007/978-981-16-1866-6_55

Linguistic Steganography Based on Automatically Generated Paraphrases Using Recurrent Neural Networks

Abstract: In the traditional approach, linguistic steganography hides information inside multimedia data: an image is used as the cover, and the secret text is converted into a stream of bits (0s and 1s). In this paper, we use both an image and a text dataset. Providing secure data with confidentiality and integrity has become a major challenge today. To improve the security of these existing parameters, we have proposed a methodology by referring to various reference papers and existing methodo…
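The abstract does not spell out the embedding scheme, but its description of converting text into 0s and 1s and hiding it in an image matches conventional least-significant-bit (LSB) substitution. Below is a minimal sketch assuming standard LSB embedding on an 8-bit cover; the function names and the random stand-in image are illustrative, not taken from the paper:

```python
import numpy as np

def text_to_bits(message: str) -> list[int]:
    """Convert a secret message into a stream of 0s and 1s (UTF-8, MSB first)."""
    return [(byte >> i) & 1 for byte in message.encode("utf-8") for i in range(7, -1, -1)]

def embed_lsb(cover: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide the bit stream in the least significant bit of each pixel value."""
    flat = cover.flatten().astype(np.uint8)  # flatten() returns a copy, cover stays intact
    if len(bits) > flat.size:
        raise ValueError("cover image too small for the message")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the LSB, then set it to the secret bit
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> str:
    """Recover the message by reading the LSBs back and repacking them into bytes."""
    bits = [int(v) & 1 for v in stego.flatten()[:n_bits]]
    data = bytes(
        sum(b << (7 - j) for j, b in enumerate(bits[i:i + 8]))
        for i in range(0, n_bits, 8)
    )
    return data.decode("utf-8")

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in cover image
bits = text_to_bits("secret")
stego = embed_lsb(cover, bits)
assert extract_lsb(stego, len(bits)) == "secret"
```

Extraction simply reads the same LSBs back, so sender and receiver only need to agree on the message length or a terminator convention.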

Cited by 3 publications (2 citation statements) | References 12 publications
“…The candidate pool was constructed based on probability difference instead of greedy sampling, and entropy coding was applied to embed secret information. Furthermore, Yi et al [31] proposed a novel linguistic steganographic method which enables the receiver to collect the tokens of the specific positions to directly constitute the secret message in a seemingly-natural steganographic text generated by the off-the-shelf BERT model equipped with Gibbs sampling. Deepthi et al [32] used support vector machine (SVM), recurrent neural network (RNN), and CNN to provide secure data with confidentiality and integrity.…”
Section: Related Work
confidence: 99%
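To make the candidate-pool idea in that statement concrete: a generative stego system does not sample the next token greedily but draws it from a small pool of high-probability candidates, letting the secret bits select the pool index. The sketch below uses fixed-length (block) coding over a top-k pool as a deliberate simplification; the cited works build the pool from probability differences and apply entropy coding, so the toy distribution and function names here are assumptions for illustration:

```python
import math

def embed_step(probs: dict[str, float], bits: str, k: int = 4) -> tuple[str, str]:
    """One generation step of candidate-pool steganography (simplified):
    take the k most probable next tokens, index them with log2(k) secret
    bits, and emit the selected token plus the unconsumed bits."""
    pool = sorted(probs, key=probs.get, reverse=True)[:k]
    n = int(math.log2(k))        # number of secret bits consumed this step
    index = int(bits[:n], 2)     # the secret bits choose the token
    return pool[index], bits[n:]

# toy next-token distribution standing in for a real language model
probs = {"the": 0.4, "a": 0.3, "this": 0.2, "that": 0.1}
token, remaining = embed_step(probs, "10")
print(token)  # -> "this" (index 0b10 = 2 in the sorted pool)
```

The receiver, running the same language model, rebuilds the same pool at each step and recovers the bits from the emitted token's index.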
“…To address the issue of semantical irrelevance, some generative steganography models using statistical language models emerged, such as using n-gram models [ 13 ] or Markov chains [ 14 ] to model semantical features. Due to the difficulty of semantical modeling, some statistical models are applied to specific genres such as short jokes [ 15 ], emails [ 16 ], and poetry [ 17 ].…”
Section: Introduction
confidence: 99%
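As an illustration of the Markov-chain approach mentioned in that statement: every state whose transition table offers multiple successors can absorb secret bits, since the bits select the branch and the generated text becomes the stego object. The bigram table below is a hypothetical toy, not drawn from the cited works:

```python
# toy bigram transition table standing in for a trained Markov chain
chain = {
    "roses": ["are", "bloom"],
    "are": ["red", "blue"],
    "red": ["roses", "skies"],
}

def generate(start: str, bits: str) -> list[str]:
    """Walk the chain; at each state with two successors, one secret
    bit selects the branch, so the generated text encodes the message."""
    words, state = [start], start
    for bit in bits:
        state = chain[state][int(bit)]
        words.append(state)
    return words

print(" ".join(generate("roses", "01")))  # -> "roses are blue"
```

A receiver holding the same chain recovers each bit by observing which successor appears in the text, which is why the sender and receiver must share the trained model exactly.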