2023
DOI: 10.11591/eei.v12i1.4229
Signal multiple encodings by using autoencoder deep learning

Abstract: Encryption is an essential step in information security: it permits only authorized persons to access private information. This study proposes a signal multi-encryptions system (SMES) technique for encoding and decoding signals, built on a deep autoencoder network (DAN). A four-layer DAN encodes a package of signals multiple times before the original signals are decoded or reconstructed. The proposed SMES offers a high level of security, as it can produce and exploit multiple encryptions fo…
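The abstract describes applying an autoencoder's encoding transform to a signal several times before decoding restores the original. The paper's trained four-layer DAN is not specified here, so the following is only a minimal sketch of that encode-repeatedly/decode-repeatedly idea, using a hypothetical orthogonal matrix as a stand-in for a trained encoder layer (its transpose then acts as the exact decoder):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # illustrative signal length

# Hypothetical stand-in for a trained encoder layer: an orthogonal
# matrix W, so the matching decoder is simply its transpose W.T.
W, _ = np.linalg.qr(rng.standard_normal((n, n)))

def encode(x, times=3):
    # Apply the encoding transform several times (the multi-encryption idea).
    for _ in range(times):
        x = W @ x
    return x

def decode(y, times=3):
    # Undo each encoding pass with the inverse (transpose) transform.
    for _ in range(times):
        y = W.T @ y
    return y

signal = rng.standard_normal(n)
restored = decode(encode(signal))
assert np.allclose(signal, restored)  # original signal is reconstructed
```

A real DAN would learn nonlinear encoder/decoder layers from data rather than use a fixed orthogonal matrix; the sketch only shows why stacking multiple encoding passes still permits exact reconstruction when each pass is invertible.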

Cited by 3 publications (3 citation statements)
References 17 publications
“…It also shows outperformances compared to other models or networks for the same training and testing samples used in Table 1, the comparisons are detailed in Table 2. That is, the suggested algorithm outperformed previous deep learning models of the stacked autoencoder (SA) [23], deep autoencoder network (DAN) [24], and autoencoder deep learning (ADL) [25] in terms of its flexibility, training time, mean square error (MSE) and its ability to recognize parents. Regarding the flexibility, the SDDL can be enlarged or reduced without the requiring of re-train again.…”
Section: SDDL Performances
confidence: 96%
“…The DDLN consistently outperforms these models across various aspects, as detailed in Table II. Notably, it excels previous deep learning models of the Stacked Autoencoder [26], Deep Autoencoder Network [27], and Autoencoder Deep Learning [28] in terms of flexibility, training time, Mean Square Error (MSE), and the ability to identify parents. The DDLN's flexibility stands out as it can be adjusted in size without necessitating re-training, unlike other networks that require specific parameters such as hidden layer counts and neurons.…”
Section: B. DDLN Performances
confidence: 99%
“…Many academic subjects including image recognitions, analyses and classifications have recently benefited from the Deep Learning (DL) techniques as in [8]-[18]. There are two types of methods that can be applied to the SL recognition: the first type is by using sensors and the second type is based on images [19].…”
Section: Introduction
confidence: 99%