2020
DOI: 10.1103/PhysRevLett.125.168301

Space of Functions Computed by Deep-Layered Machines

Abstract: We study the space of functions computed by random-layered machines, including deep neural networks and Boolean circuits. Investigating the distribution of Boolean functions computed on the recurrent and layer-dependent architectures, we find that it is the same in both models. Depending on the initial conditions and computing elements used, we characterize the space of functions computed at the large depth limit and show that the macroscopic entropy of Boolean functions is either monotonically increasing or d…
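
The abstract describes measuring the distribution (and entropy) of Boolean functions computed by random layered machines. Below is a minimal Monte Carlo sketch of that idea, not the paper's analytic mean-field treatment: it samples random layered Boolean circuits, records the Boolean function each circuit computes as a truth table, and tracks how the empirical entropy of the resulting distribution changes with depth. The gate set, width, input size, and sample count are illustrative assumptions.

# Minimal Monte Carlo sketch (illustrative, not the paper's method):
# sample random layered Boolean circuits and estimate the empirical
# entropy of the distribution of Boolean functions they compute.
import itertools
import random
from collections import Counter
from math import log2

def random_layered_function(n_inputs, width, depth, rng):
    """Truth table (tuple of output bits) of one random layered circuit.

    Each node in a layer applies a randomly chosen 2-input gate
    (AND/OR/XOR/NAND) to two randomly chosen nodes of the previous layer;
    the circuit's output is node 0 of the last layer.
    """
    gates = [
        lambda a, b: a & b,        # AND
        lambda a, b: a | b,        # OR
        lambda a, b: a ^ b,        # XOR
        lambda a, b: 1 - (a & b),  # NAND
    ]
    # Fix the wiring and gate choices once, then evaluate on every input.
    layers = []
    prev_width = n_inputs
    for _ in range(depth):
        layer = [(rng.randrange(prev_width), rng.randrange(prev_width),
                  rng.choice(gates)) for _ in range(width)]
        layers.append(layer)
        prev_width = width

    table = []
    for bits in itertools.product((0, 1), repeat=n_inputs):
        state = list(bits)
        for layer in layers:
            state = [g(state[i], state[j]) for i, j, g in layer]
        table.append(state[0])
    return tuple(table)

def empirical_entropy(samples):
    """Shannon entropy (bits) of the empirical distribution over functions."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

if __name__ == "__main__":
    rng = random.Random(0)
    n, width, n_samples = 3, 8, 2000
    for depth in (1, 2, 4, 8, 16):
        funcs = [random_layered_function(n, width, depth, rng)
                 for _ in range(n_samples)]
        print(f"depth {depth:2d}: {len(set(funcs)):4d} distinct functions, "
              f"empirical entropy ~ {empirical_entropy(funcs):.2f} bits")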

Cited by 5 publications (9 citation statements)
References 29 publications (38 reference statements)

“…Nonetheless, the structure of the mean-field equations can give rise to the same Gaussian processes kernel in the limit of infinite width for both DNNs and RNNs if the readout in the RNN is taken from a single time step. This finding holds for single inputs, as pointed out in [40], as well as input sequences. Furthermore, for a point-symmetric activation function [40], there is no observable difference between DNNs and RNNs on the mean-field level if the biases are uncorrelated in time and the input is only supplied in the first time step.…”
Section: Introduction (supporting)
confidence: 69%
“…Going beyond the leading order, we compute the next-to-leading-order corrections for both DNNs and RNNs.…”
Section: Introduction (supporting)
confidence: 69%
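
The statements above concern the leading-order, infinite-width (mean-field) picture in which DNNs and RNNs read out at a single time step share the same Gaussian process kernel. Below is a minimal numerical sketch of that leading-order kernel recursion, assuming i.i.d. Gaussian weights, a point-symmetric erf activation, and the known closed-form arcsine kernel map for that nonlinearity; in this picture the same iteration is applied once per layer of a feedforward network or once per time step of a recurrent network whose input enters only at the first step. Hyperparameter values are illustrative, and this is not the cited papers' own calculation.

# Minimal sketch of the infinite-width (mean-field) kernel recursion,
# assuming i.i.d. Gaussian weights of variance sigma_w^2 / N, bias
# variance sigma_b^2, and an erf activation (point-symmetric), for which
# the layer-to-layer kernel map has a closed arcsine form.  The same
# iteration describes one layer of a DNN or one time step of an RNN with
# input supplied only at the first step, per the statements above.
import numpy as np

def kernel_recursion(K, sigma_w2=1.5, sigma_b2=0.1):
    """One layer (or one time step) of the erf/arcsine kernel map."""
    norm = np.sqrt((1.0 + 2.0 * np.diag(K))[:, None] *
                   (1.0 + 2.0 * np.diag(K))[None, :])
    return sigma_b2 + sigma_w2 * (2.0 / np.pi) * np.arcsin(2.0 * K / norm)

def input_kernel(X, sigma_w2=1.5, sigma_b2=0.1):
    """Kernel of first-layer pre-activations for inputs X (one per row)."""
    d = X.shape[1]
    return sigma_b2 + sigma_w2 * (X @ X.T) / d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2, 10))          # two example inputs
    K = input_kernel(X)
    for depth in range(1, 21):
        K = kernel_recursion(K)
        # normalized correlation between the two inputs at this depth
        c = K[0, 1] / np.sqrt(K[0, 0] * K[1, 1])
        if depth in (1, 2, 5, 10, 20):
            print(f"depth {depth:2d}: correlation between inputs ~ {c:+.4f}")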
“…This could give theoretical backing to computational insights into the robustness and evolvability of circuits [29]. When the number of composition levels is large, this could shed light on the space of functions in some types of neural networks [30].…”
Section: Number of Permitted Logics (mentioning)
confidence: 99%