2013
DOI: 10.1103/PhysRevE.87.032811
Two-layer tree-connected feed-forward neural network model for neural cryptography

Abstract: Neural synchronization by means of mutual learning provides an avenue to design public key exchange protocols, bringing about what is known as neural cryptography. Two identically structured neural networks learn from each other and reach full synchronization eventually. The full synchronization enables two networks to have the same weight, which can be used as a secret key for many subsequent cryptographic purposes. It is striking to observe that after the first decade of neural cryptography, the tree parity …
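The mutual-learning scheme the abstract describes can be illustrated with the classic single-layer tree parity machine (TPM) that the citing works discuss: two machines see the same public random inputs, exchange only their one-bit outputs, and apply a Hebbian update whenever those outputs agree, until their weights coincide. The following is a minimal sketch under assumed illustrative sizes (K=3 hidden units, N=8 inputs each, weight bound L=3); the variable names and parameter choices are mine, not taken from the paper, and the paper's own TTFNN model is a two-layer variant of this idea.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N, L = 3, 8, 3  # hidden units, inputs per unit, weight bound (illustrative)

def init_weights():
    # integer weights in {-L, ..., L}
    return rng.integers(-L, L + 1, size=(K, N))

def output(w, x):
    # sigma_k = sign(w_k . x_k); tau is the product of the hidden-unit signs
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1  # conventionally map sign(0) to -1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    # update only the hidden units that agree with the common output,
    # clipping weights back into [-L, L]
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA, wB = init_weights(), init_weights()
steps = 0
while steps < 200_000 and not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))  # public random input, seen by both
    sA, tA = output(wA, x)
    sB, tB = output(wB, x)
    if tA == tB:  # learn only when the exchanged output bits agree
        hebbian_update(wA, x, sA, tA)
        hebbian_update(wB, x, sB, tB)
    steps += 1

print("synchronized after", steps, "exchanged inputs")
```

Once the loop terminates, the shared weight matrix can serve as the secret key material; an eavesdropper sees only the inputs and the single output bits, which is what makes attacks on this protocol an interesting problem.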

Cited by 11 publications (10 citation statements); references 29 publications.
“…The ANN model applies a learning algorithm for nonlinear statistical data modeling by mimicking the way nerve cells work in the human brain, and the model is particularly efficient in implicitly estimating complex nonlinear relationships between input features and target predictions 86 . We choose a single-layer feed-forward neural network consisting of an input layer, a hidden layer, and an output layer [87][88][89] . The inputs are the pore structural features, and the effective reaction rates from pore-scale simulations are the target predictions.…”
Section: Machine Learning Methods
confidence: 99%
“…With respect to the proposed algorithms for the design of the TPM network, Lei et al [13] developed a two-layer, prepowered TPM network model. Fast synchronization can be achieved by increasing the minimum value of the internal representations' Hamming distance and by reducing the probability of a step that does not modify the networks' weights.…”
Section: Related Work
confidence: 99%
“…As research in this field progressed, a general consensus started emerging that, to date, the tree parity machine (TPM) network with K=3 hidden units is the model most suitable for a neural protocol [28]. Thus, it becomes important to explore a wider variety of neural network architectures and synchronizing mechanisms that provide enhanced security against different forms of attacks.…”
Section: Other Recent Studies
confidence: 99%
“…Thus, in 2013, a two-layer tree-connected feed-forward neural network (TTFNN) model was proposed. This model utilized the concept that two communicating partners are capable of exchanging a vector with multiple bits in each time step [28]. In this work, feasible conditions and heuristic rules that make the neural-synchronization-based protocol successful against common attacks were obtained.…”
Section: Other Recent Studies
confidence: 99%