2020
DOI: 10.3934/fods.2020008
Hierarchical approximations for data reduction and learning at multiple scales

Cited by 9 publications (5 citation statements)
References 15 publications
“…Depending on the available data, the objective and complexity of the problem, the available computational budget, and the desired level of accuracy, various machine learning models are available to choose from, ranging from simple multiple linear regression to more complex artificial neural networks. Moreover, even for a chosen machine learning model, one needs to tune its hyperparameters, which in turn drastically affect its accuracy and efficiency [21,22]. For example, for a neural network, the number of layers and the number of neurons in each layer can be adjusted to trade off accuracy against efficiency [23].…”
Section: Selection of Suitable Machine Learning Model (mentioning)
confidence: 99%
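The trade-off this excerpt describes can be made concrete with a minimal sketch: a hypothetical grid search over the depth and width of a small neural network, where each configuration changes both accuracy and fitting cost. This uses scikit-learn purely for illustration; the toy data, parameter grid, and settings are assumptions, not taken from the cited work.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))                    # toy inputs
y = np.sin(X).sum(axis=1) + 0.1 * rng.standard_normal(200)   # toy target

# Each tuple below is (neurons in layer 1, neurons in layer 2, ...),
# so the grid varies both the depth and the width of the network.
param_grid = {"hidden_layer_sizes": [(8,), (32,), (32, 32), (64, 64, 64)]}

search = GridSearchCV(
    MLPRegressor(max_iter=2000, random_state=0),
    param_grid,
    cv=3,
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```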
“…Remark 1. From [44,2], we know that the numerical rank of the Gaussian kernel increases monotonically with $s$ (mainly due to the narrower support at higher scales); hence, for sufficiently high $s \ (\geq \omega)$, $\left\| \sum_{i=1}^{n} \alpha_i \psi_i^s \right\|_2^2 > 0$ (with equality only when $\alpha = 0$), making the Gramian matrix $\left[ K(x_i, x_j) \right]_{i,j=1}^{n}$ numerically positive definite.…”
Section: Multiscale Kernel (mentioning)
confidence: 99%
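A small numerical sketch of the remark's claim, under an assumed dyadic bandwidth $\sigma_s = 2^{-s}$ (the paper's exact scale parametrization may differ): as the scale $s$ increases, the Gaussian Gramian's numerical rank grows and its smallest eigenvalue moves away from zero, i.e., the matrix becomes numerically positive definite.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, size=50))      # 1-D sample points
d2 = (x[:, None] - x[None, :]) ** 2              # squared pairwise distances

for s in range(6):
    sigma = 2.0 ** (-s)                          # narrower support at higher s
    K = np.exp(-d2 / (2.0 * sigma**2))           # Gaussian Gram matrix at scale s
    eigvals = np.linalg.eigvalsh(K)
    rank = int(np.sum(eigvals > 1e-10 * eigvals.max()))   # numerical rank
    print(f"s={s}: numerical rank={rank:2d}, min eigenvalue={eigvals.min():+.2e}")
```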
“…Here, we have chosen to put a tolerance on the inner product as a termination criterion because it ensures that our basis remains well conditioned for the ordinary least-squares computation, and also because the reduction of MSE with the addition of the $b_j$'s is bounded below. This is crucial for our algorithm, as the kernel functions which constitute our basis are highly redundant and ill-conditioned (especially at the initial scales [44]). This is fundamentally different from traditional forward greedy methods such as [12], which keep choosing the function from the dictionary that is most correlated with the current residual (without any additional checks) and put a tolerance on the residual as the termination criterion.…”
Section: Multiscale Algorithm (mentioning)
confidence: 99%
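The contrast the excerpt draws, terminating on an inner-product tolerance rather than on the residual norm, can be sketched as below. This is an illustrative reading, not the paper's algorithm: the dictionary D, the tolerance tau, and the OLS refit are assumptions. An atom nearly in the span of the already-selected basis has small correlation with the least-squares residual, so the tolerance check also guards against an ill-conditioned basis.

```python
import numpy as np

def greedy_select(D, y, tau=0.1, max_terms=20):
    """Select columns of the dictionary D (shape: n_samples x n_atoms)
    to fit y, stopping when no atom's normalized inner product with the
    current residual exceeds tau. Illustrative sketch only."""
    selected = []
    residual = y.astype(float).copy()
    for _ in range(max_terms):
        r_norm = np.linalg.norm(residual)
        if r_norm == 0.0:
            break
        # normalized correlation of every atom with the current residual
        corr = np.abs(D.T @ residual) / (np.linalg.norm(D, axis=0) * r_norm)
        j = int(np.argmax(corr))
        if corr[j] < tau:
            # inner-product tolerance: stop before admitting an atom that
            # would barely reduce the MSE and could ill-condition the basis
            break
        selected.append(j)
        # ordinary least-squares refit on all atoms selected so far
        coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        residual = y - D[:, selected] @ coef
    return selected
```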
“…Tools to interpolate the irregularly distributed observations into user-defined meshes or grids, to estimate ice elevation and thickness change, and to perform spatial interpolation will be incorporated into the workflow. Finally, elevation change time series will be combined with models enabling estimation of mass change time series [70,71]. By deriving time series of elevation and mass changes, with error estimates, for decadal changes of an entire ice sheet, the resulting tool will provide much-needed flexibility for intercomparisons, both between observations and between models and observations.…”
Section: Community Codes (mentioning)
confidence: 99%
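The interpolation step this excerpt mentions, mapping irregularly distributed observations onto a user-defined grid, might look like the following minimal sketch. It uses scipy.interpolate.griddata on synthetic data as an assumption; the cited project's actual tooling is not specified here.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 100.0, size=(500, 2))        # scattered (x, y) sites, km
elev = 1000.0 + 5.0 * pts[:, 0] - 2.0 * pts[:, 1]   # synthetic elevations, m

# interpolate onto a user-defined regular 1-km grid
gx, gy = np.meshgrid(np.arange(0.0, 100.0, 1.0), np.arange(0.0, 100.0, 1.0))
grid_elev = griddata(pts, elev, (gx, gy), method="linear")  # NaN outside hull
print(grid_elev.shape, np.nanmean(grid_elev))
```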