2020
DOI: 10.48550/arxiv.2005.14125
Preprint
Notes on ridge functions and neural networks

Abstract: To the Memory of My Parents.

Preface. These notes are about ridge functions. Recent years have witnessed a flurry of interest in these functions. Ridge functions appear in various fields and under various guises: in partial differential equations (where they are called plane waves), in computerized tomography, and in statistics. They also underpin many central models in neural networks. We are interested in ridge functions from the point of view of approximation…

Cited by 2 publications (2 citation statements)
References 83 publications
“…There are many standard examples of ridge function losses in machine learning, e.g., least-squares regression, logistic regression, one-hidden-layer neural networks, etc. [15], [16]. The natural model for Principal Component Regression (PCR) is another such example [17].…”
Section: Introduction
confidence: 99%
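The examples in the statement above share one structure: as a function of the parameter vector, each per-sample loss is a ridge function, i.e., it has the form φ(⟨a, w⟩) and so varies only along a single direction a. A minimal sketch (the function names and data here are illustrative, not taken from the paper or its citing works):

```python
import numpy as np

def ridge(phi, a):
    """Build the ridge function w -> phi(<a, w>)."""
    return lambda w: phi(np.dot(a, w))

x, y = np.array([1.0, 2.0, -1.0]), 1.0  # one (feature, label) sample

# Per-sample least-squares loss: (x.w - y)^2, a ridge function with
# direction a = x and profile phi(t) = (t - y)^2.
sq_loss = ridge(lambda t: (t - y) ** 2, x)

# Per-sample logistic loss: log(1 + exp(-y * x.w)), same direction x,
# profile phi(t) = log(1 + exp(-y * t)).
logit_loss = ridge(lambda t: np.log1p(np.exp(-y * t)), x)

w = np.array([0.5, 0.0, 0.5])  # here x.w = 0.0
print(sq_loss(w))     # (0.0 - 1.0)^2 = 1.0
print(logit_loss(w))  # log(1 + exp(0)) = log 2

# The defining ridge property: constant on hyperplanes orthogonal to x.
v = np.array([2.0, -1.0, 0.0])  # x.v = 0
print(sq_loss(w + v) == sq_loss(w))  # True
```

A one-hidden-layer network stacks such terms, sum_i c_i * sigma(<a_i, w>), which is why ridge-function approximation theory bears directly on these models.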
“…The research on this subject was carried out in two directions. In the first direction, the analysis was concentrated on approximative versions of Kolmogorov's theorem and similar results on feedforward neural networks (see, e.g., [4,9,11,16,17,18,20]). In the second direction, the precise form of the Kolmogorov superposition theorem and its relationship to neural networks were studied.…”
Section: Introduction
confidence: 99%