2022
DOI: 10.48550/arxiv.2210.17020
Preprint
A Law of Data Separation in Deep Learning

Abstract: Multilayer neural networks have achieved superhuman performance in many artificial intelligence applications. However, their black-box nature obscures the underlying mechanism for transforming input data into labels throughout all layers, thus hindering architecture design for new tasks and interpretation for high-stakes decision making. We addressed this problem by introducing a precise law that governs how real-world deep neural networks separate data according to their class membership from the bottom laye…

Cited by 1 publication (1 citation statement)
References 23 publications
“…One of the commonly adopted metrics for NC1 is the normalized within-class covariance (Papyan et al., 2020; Zhu et al., 2021; Tirer & Bruna, 2022). The term is commonly referred to as Separation Fuzziness or simply Fuzziness in the related literature (He & Su, 2022), and is inherently related to the Fisher discriminant ratio (Zarka et al., 2020).…”

Section: Previous Collapse Metrics

confidence: 99%
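The Separation Fuzziness quantity mentioned in the citation statement can be sketched in a few lines of NumPy. This is a minimal illustration assuming the common formulation Tr(SS_b⁺ SS_w), i.e. the within-class scatter measured against the (pseudo-inverted) between-class scatter; the function name and the exact normalization are illustrative assumptions, not taken verbatim from the cited papers.

```python
import numpy as np

def separation_fuzziness(features, labels):
    """Sketch of a separation-fuzziness metric: Tr(SS_b^+ SS_w).

    Small values mean classes are tightly clustered relative to the
    spread between class means (good separation); large values mean
    the classes overlap (fuzzy separation).
    """
    n, d = features.shape
    global_mean = features.mean(axis=0)
    ss_w = np.zeros((d, d))  # within-class scatter
    ss_b = np.zeros((d, d))  # between-class scatter
    for c in np.unique(labels):
        X_c = features[labels == c]
        mu_c = X_c.mean(axis=0)
        centered = X_c - mu_c
        ss_w += centered.T @ centered / n
        diff = (mu_c - global_mean)[:, None]
        ss_b += (len(X_c) / n) * (diff @ diff.T)
    # Pseudo-inverse: ss_b is typically rank-deficient (rank <= K - 1).
    return float(np.trace(np.linalg.pinv(ss_b) @ ss_w))
```

Applied to the layer-wise features of a trained network, a metric of this kind is what the paper tracks to observe how data separation improves from the bottom layers to the top.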