A multi-level procedure for enhancing accuracy of machine learning algorithms
2020 | DOI: 10.1017/s0956792520000224

Abstract: We propose a multi-level method to increase the accuracy of machine learning algorithms for approximating observables in scientific computing, particularly those that arise in systems modelled by differential equations. The algorithm relies on judiciously combining a large number of computationally cheap training samples on coarse resolutions with a few expensive training samples on fine grid resolutions. Theoretical arguments for lowering the generalisation error, based on reducing the variance of the underlying…
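To make the construction concrete, the following is a minimal two-level sketch in Python. It is not the authors' implementation: the solver stand-ins (solve_coarse, solve_fine), the sample counts, and the network sizes are illustrative assumptions. A large network is trained on many cheap coarse-grid evaluations of the observable, and a small network on a few expensive fine-minus-coarse corrections; because the correction has small variance, few fine samples suffice, which is the variance-reduction argument sketched in the abstract.

```python
# Two-level surrogate sketch (assumed setup, not the paper's code):
# level 0 learns the coarse-grid observable from many cheap samples;
# level 1 learns the fine-minus-coarse correction from a few expensive ones.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def solve_coarse(x):
    # Placeholder for a cheap coarse-resolution evaluation of the observable.
    return np.sin(2 * np.pi * x) + 0.1 * x

def solve_fine(x):
    # Placeholder for an expensive fine-resolution evaluation.
    return np.sin(2 * np.pi * x)

# Level 0: many cheap coarse samples.
n_coarse, n_fine = 2000, 50
x0 = rng.uniform(0.0, 1.0, size=(n_coarse, 1))
model_coarse = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                            random_state=0).fit(x0, solve_coarse(x0).ravel())

# Level 1: a few expensive samples train on the detail (fine - coarse),
# whose variance is small, so few samples are needed.
x1 = rng.uniform(0.0, 1.0, size=(n_fine, 1))
d1 = (solve_fine(x1) - solve_coarse(x1)).ravel()
model_detail = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                            random_state=0).fit(x1, d1)

def predict(x):
    # Multi-level prediction: coarse surrogate plus learned correction.
    return model_coarse.predict(x) + model_detail.predict(x)

x_test = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
err = np.mean(np.abs(predict(x_test) - solve_fine(x_test).ravel()))
print(f"mean abs error of the two-level surrogate: {err:.4f}")
```

The same telescoping idea extends to more than two levels, with each level trained only on the difference between successive resolutions.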

Cited by 31 publications (31 citation statements) | References 28 publications
“…This could in turn benefit the accuracy and the convergence of the training process (see, e.g. [ 46 ] for the development of multilevel DNN training algorithms, albeit in another class of applications). In the context of multiscale networks, identifying the appropriate copy-number scalings that give rise to reduced models with simpler dynamics is a highly specialised task requiring careful theoretical analysis [ 16 ].…”
Section: Discussion
confidence: 99%
“…Furthermore, building a surrogate model can be especially useful in uncertainty quantification [69]. NNs can also aid classical methods in solving goal-oriented tasks [15,44]. In addition to the aforementioned research directions, further work has been done on fusing NNs with classical numerical methods to assist, for example, in model-order reduction [40,61].…”
Section: Related Work
confidence: 99%
“…In contrast to data science applications, in this context, one can control the training data's accuracy and combine many relatively cheap low-fidelity samples with a few high-fidelity samples. The theoretical arguments and numerical evidence in [5] show that such a multi-level procedure can lower the generalisation error considerably. Becker et al [1] tackle optimal stopping problems for financial derivatives such as American or Bermudan options.…”
confidence: 95%