2013
DOI: 10.1007/s10589-013-9614-z

Approximate dynamic programming for stochastic N-stage optimization with application to optimal consumption under uncertainty

Cited by 24 publications (17 citation statements)
References 56 publications
“…Classic results on the approximating properties of popular models of learning from data, and on their use in ADP, are reported in the literature [10, 11, 36–39]. In the following, we focus the analysis only on the term ê_{J_t}(·), called the "estimation error" in the literature [40], as it is the one that depends on the sampling points Σ^{L_t}. Formally, it quantifies the error committed when the estimate is obtained by minimizing Equation (3) instead of the integral error e_{J_t}(·) in Equation (7).…”
Section: Application Of Lattice Point Set Sampling To Approximate Dynamic Programming
confidence: 99%
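The distinction the excerpt draws, between the empirical error minimized over the sampling points and the integral error, can be illustrated with a minimal sketch (the toy target function, the model class, and all variable names are illustrative assumptions, not taken from the cited paper):

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

def target(x):
    """Stand-in for a cost-to-go function J_t (illustrative only)."""
    return np.sin(3 * x) + 0.5 * x

# L sampling points (playing the role of the set Sigma^L_t in the excerpt)
L = 20
x_samples = rng.uniform(-1.0, 1.0, size=L)

# Least-squares polynomial fit: minimizes the *empirical* error
# over the sampling points (the analogue of Equation (3)).
model = Polynomial.fit(x_samples, target(x_samples), deg=3)

# Empirical error: measured only on the sampling points.
emp_err = np.mean((model(x_samples) - target(x_samples)) ** 2)

# Integral error (the analogue of Equation (7)): approximated here
# by dense Monte Carlo integration over the domain.
x_dense = rng.uniform(-1.0, 1.0, size=100_000)
int_err = np.mean((model(x_dense) - target(x_dense)) ** 2)

# The gap between the two quantities is the flavor of "estimation
# error": it depends on where, and how many, sampling points were drawn.
print(f"empirical error: {emp_err:.4f}")
print(f"integral  error: {int_err:.4f}")
```

Rerunning with a different seed or a different number of sampling points changes the gap, which is exactly why the excerpt's analysis focuses on the term that depends on the sampling points.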
“…[3–5] In the continuous-state case, obtaining the estimate of the cost-to-go relies on two main elements: (i) a class of models to approximate the cost-to-go functions and (ii) a suitable sampling of the state space. Concerning (i), many popular models of learning from data have been used in the literature, such as splines [6, 7], polynomial approximators [8], neural networks [9–11], and local kernel models [12]. In this paper, we focus on (ii), i.e., sampling, which is a critical part of the ADP algorithm in terms of accuracy and computational effort.…”
Section: Introduction
confidence: 99%
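A minimal sketch of how the two elements interact in one backward ADP step, using a toy 1-D stochastic control problem, a polynomial model class for element (i), and uniform state sampling for element (ii). All problem details here are illustrative assumptions, not the setup of the cited works:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D problem: dynamics x' = x + u + w, stage cost x^2 + u^2,
# noise w ~ N(0, 0.1^2). Purely illustrative.
def step(x, u, w):
    return x + u + w

def stage_cost(x, u):
    return x ** 2 + u ** 2

def features(x):
    """Element (i): a simple polynomial model class for the cost-to-go."""
    return np.stack([np.ones_like(x), x, x ** 2], axis=-1)

# Element (ii): sampling of the continuous state space.
x_samples = rng.uniform(-2.0, 2.0, size=200)

# Terminal cost-to-go J_{t+1}(x) = x^2, then one backward ADP step.
theta = np.array([0.0, 0.0, 1.0])       # coefficients of J_{t+1}
controls = np.linspace(-2.0, 2.0, 41)   # coarse control grid
noise = rng.normal(0.0, 0.1, size=30)   # Monte Carlo noise samples

targets = np.empty_like(x_samples)
for i, x in enumerate(x_samples):
    # Empirical Bellman backup: min over controls of stage cost plus
    # the expected approximate cost-to-go at the next state.
    q = [stage_cost(x, u) + np.mean(features(step(x, u, noise)) @ theta)
         for u in controls]
    targets[i] = min(q)

# Fit the cost-to-go model by least squares on the sampled states.
Phi = features(x_samples)
theta_t, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
print("fitted J_t coefficients:", theta_t)
```

The accuracy of `theta_t` depends both on the expressiveness of `features` (element (i)) and on how `x_samples` covers the state space (element (ii)), which is the trade-off the excerpt highlights.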
“…For instance, the Least Absolute Shrinkage and Selection Operator (LASSO) [49] was applied in [28] to consensus problems, and in [23] to Model Predictive Control (MPC). Applications of machine-learning techniques to control can be found, e.g., in [48] and in the series of papers [21, 22, 29], where Least Squares Support Vector Machines (LS-SVMs) and one-hidden-layer perceptron neural networks, respectively, were applied to find suboptimal solutions to optimal control problems. In [36], spectral graph theory methods, already exploited successfully in machine-learning problems [5], were applied to the control of multi-agent dynamical systems.…”
Section: Application Of Machine-Learning Techniques To Optimization/O…
confidence: 99%
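As a reference point for the LASSO mentioned in the excerpt, here is a minimal solver sketch using iterative soft thresholding (ISTA). This is a standard textbook method for the LASSO objective, not necessarily the algorithm used in the cited works, and the problem data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, b, lam, n_iter=500):
    """Minimize (1/2)||A w - b||^2 + lam * ||w||_1 by proximal
    gradient descent (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - b)
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Sparse ground truth: only 2 of 10 coefficients are nonzero.
A = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[[1, 6]] = [2.0, -1.5]
b = A @ w_true + 0.01 * rng.normal(size=50)

# The l1 penalty drives most coefficients exactly to zero,
# which is the feature selection property the LASSO is known for.
w_hat = lasso_ista(A, b, lam=1.0)
print("nonzeros recovered:", np.nonzero(np.abs(w_hat) > 0.1)[0])
```

The sparsity of the recovered solution is what makes the LASSO attractive in the consensus and MPC applications the excerpt cites.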
“…Moreover, when shifting the sliding window one unit to the right (hence inserting a new example and removing the oldest one), a recursive approach could be used to generate the optimal solution of the resulting optimization problem, starting from the one of the previous problem. This approach is called "downdating" in the literature (as opposed to "updating") [22]. As shown in Section 7, for every k = 1, 2, …”
Section: The Auxiliary |G…
confidence: 99%
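The updating/downdating idea from the excerpt can be sketched with rank-one Sherman-Morrison updates of the inverse Gram matrix for a sliding-window least-squares problem. This is a generic illustration of the technique under toy assumptions, not the specific construction of Section 7 in the cited paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def sm_update(P, x, sign):
    """Sherman-Morrison rank-one update of P = (A^T A)^{-1}:
    sign=+1 inserts row x ("updating"), sign=-1 removes it ("downdating")."""
    Px = P @ x
    return P - sign * np.outer(Px, Px) / (1.0 + sign * x @ Px)

# Sliding-window least squares on a toy data stream.
d, window = 3, 20
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(100, d))
y = X @ w_true + 0.01 * rng.normal(size=100)

# Initialize from the first `window` rows (one-off factorization).
A, b = X[:window], y[:window]
P = np.linalg.inv(A.T @ A)
z = A.T @ b

# Shift the window one step at a time: update with the new row and
# downdate with the oldest one, never refactorizing from scratch.
for k in range(window, 100):
    x_new, x_old = X[k], X[k - window]
    P = sm_update(P, x_new, +1.0)                  # updating
    P = sm_update(P, x_old, -1.0)                  # downdating
    z = z + y[k] * x_new - y[k - window] * x_old
    w = P @ z                                       # current LS solution

print("final window estimate:", w)
```

Each window shift costs O(d^2) instead of the O(n d^2) of a full refit, which is the computational payoff of the recursive approach the excerpt describes.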