2021
DOI: 10.1155/2021/8892636

An Improvement of Stochastic Gradient Descent Approach for Mean-Variance Portfolio Optimization Problem

Abstract: In this paper, a current variant of the stochastic gradient descent (SGD) approach, namely the adaptive moment estimation (Adam) approach, is improved by adding the standard error to the updating rule. The aim is to speed up the convergence of the Adam algorithm. This improvement is termed the Adam with standard error (AdamSE) algorithm. In addition, the mean-variance portfolio optimization model is formulated from the historical data of the rate of return of the S&P 500 stock, 10-year…
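The abstract states only that the standard error enters the updating rule; a minimal NumPy sketch of one possible reading is given below, in which the standard error of the per-sample gradients in a mini-batch augments the Adam step. The function name adamse_step, the exact way the standard error is combined with the moment estimates, and the hyperparameter defaults are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def adamse_step(w, grads, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One hypothetical AdamSE update (illustrative only).

    w     : parameter vector
    grads : (batch_size, dim) array of per-sample gradients
    m, v  : running first- and second-moment estimates (Adam state)
    t     : step counter, starting from 1
    """
    g = grads.mean(axis=0)                                 # mini-batch gradient
    se = grads.std(axis=0, ddof=1) / np.sqrt(len(grads))   # standard error of the gradient
    m = b1 * m + (1 - b1) * g                              # first-moment estimate
    v = b2 * v + (1 - b2) * g ** 2                         # second-moment estimate
    m_hat = m / (1 - b1 ** t)                              # bias corrections
    v_hat = v / (1 - b2 ** t)
    # Assumed modification: the standard error enlarges the numerator,
    # taking bigger steps where the gradient estimate is noisy.
    w = w - lr * (m_hat + se) / (np.sqrt(v_hat) + eps)
    return w, m, v
```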

Cited by 11 publications (5 citation statements); references 6 publications.
“…Some of the optimizers that can be used are the AdaDelta, Nesterov, momentum, RMSProp, AdaGrad, Adam, Nadam and AdaMax gradient-descent (GD) algorithms [12]. The Adam and SGD estimation methods are very flexible and converge significantly faster than other methods, as explained by Su and Kek [13]. In this study, the accuracy of the RNN model is demonstrated using the Adam and SGD optimizers and variations of the ReLU, sigmoid, tanh and Gaussian activation functions.…”
Section: Pendahuluan (Introduction)
Citation type: unclassified
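The comparison sketched in that passage can be reproduced with a small Keras model in which the optimizer and activation are swapped; the layer sizes, input shape and loss below are assumptions for illustration, and the Gaussian activation is omitted because it is not a built-in Keras activation.

```python
import tensorflow as tf

def build_rnn(activation="tanh", optimizer="adam"):
    """Small RNN whose activation and optimizer can be varied."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10, 1)),                      # 10 time steps, 1 feature (assumed)
        tf.keras.layers.SimpleRNN(32, activation=activation),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=optimizer, loss="mse")
    return model

# Build every optimizer/activation combination to compare accuracy.
models = {(opt, act): build_rnn(act, opt)
          for opt in ("adam", "sgd")
          for act in ("relu", "sigmoid", "tanh")}
```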
“…Gradient-descent methods do not need an analytical solution of the optimization problem; they approximate the solution iteratively. One of the efficient optimizers is the ADAM (ADAptive Moment) method (Su & Kek, 2021), whose key innovations are the windowed estimates of the first and second moments of the gradient. These two moments are maintained throughout the whole training process and updated at each step, so as to drive the parameter vector away from saddle points toward a local minimum.…”
Section: Training of T-ARX Model
Citation type: mentioning
confidence: 99%
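To make the maintained moment estimates concrete, here is a self-contained NumPy loop running the commonly published form of Adam on a toy quadratic objective; the objective, step count and hyperparameters are chosen only for illustration.

```python
import numpy as np

def grad(w):
    return 2.0 * (w - np.array([1.0, -2.0]))    # gradient of ||w - w*||^2

w = np.zeros(2)
m = np.zeros_like(w)                            # first-moment estimate, kept for the whole run
v = np.zeros_like(w)                            # second-moment estimate, kept for the whole run
lr, b1, b2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 501):
    g = grad(w)
    m = b1 * m + (1 - b1) * g                   # windowed mean of past gradients
    v = b2 * v + (1 - b2) * g ** 2              # windowed uncentred variance
    m_hat = m / (1 - b1 ** t)                   # bias corrections for early steps
    v_hat = v / (1 - b2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(w)                                        # approaches the minimiser [1.0, -2.0]
```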
“…The common algorithms used to compute the gradient steps when training networks are Stochastic Gradient Descent with Momentum (SGDM) and Adaptive Moment Estimation (ADAM) [37,38]. In this study, we used both optimizers, SGDM and ADAM, to train the network and later compared their performance.…”
Section: Stochastic Gradient Descent
Citation type: mentioning
confidence: 99%
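Since SGDM is named here without a definition, the short NumPy sketch below shows its update rule next to the Adam step used earlier: a single velocity term accumulates past gradients, with no second-moment rescaling. The function name and momentum coefficient are illustrative assumptions.

```python
import numpy as np

def sgdm_step(w, g, velocity, lr=1e-2, momentum=0.9):
    """SGD with momentum: the velocity is an exponentially weighted
    accumulation of past gradients; unlike Adam, there is no adaptive
    per-parameter scaling by a second-moment estimate."""
    velocity = momentum * velocity - lr * g
    return w + velocity, velocity
```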