2021
DOI: 10.1007/978-3-030-77876-7_32
Machine Learning Algorithms of Relaxation Subgradient Method with Space Extension

Cited by 3 publications (10 citation statements). References 16 publications.
“…Therefore, regardless of which of equations (14) or (19) is used to transform the matrices, the result (29) is valid for a sequence of vectors composed of g_k or y_k, depending on which one of them is used to transform the matrices. Let us show that for Algorithm 1 with fixed values of the parameter α_k, estimates similar to (29) are valid, and we obtain expressions for the admissible parameters α_k in (14), (19) at which the values (∆_k, A_k ∆_k) do not increase.…”
Section: Machine Learning Algorithm With Space Dilation For Solving S...
confidence: 92%
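The quoted passage concerns matrix transformations with space dilation, under which the quantities (∆_k, A_k ∆_k) do not increase. The paper's exact update rules (14) and (19) are not reproduced in this report, but the classical building block of such methods, Shor's space dilation operator, can be sketched as follows (an illustrative sketch, not the paper's algorithm):

```python
import numpy as np

def dilation_matrix(xi, alpha):
    """Shor's space dilation operator:
    R_alpha(xi) = I + (alpha - 1) * xi xi^T / ||xi||^2.

    For 0 < alpha < 1 this contracts vectors along the direction xi
    while leaving the orthogonal complement unchanged -- the standard
    primitive in subgradient methods with space dilation."""
    xi = xi / np.linalg.norm(xi)          # normalize the dilation direction
    n = xi.size
    return np.eye(n) + (alpha - 1.0) * np.outer(xi, xi)

# The component of a vector along xi is scaled by alpha;
# the orthogonal component is untouched.
xi = np.array([1.0, 0.0])
R = dilation_matrix(xi, 0.5)
v = np.array([2.0, 3.0])
print(R @ v)  # component along xi halved: [1. 3.]
```

In the methods discussed, such operators are applied iteratively to the metric matrices, which is what makes estimates like (29) about the nonincrease of (∆_k, A_k ∆_k) possible.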
“…In this study, which is an extension of previous work [1], the problem of minimizing a convex, not necessarily differentiable function f(x), x ∈ R^n (where R^n is a finite-dimensional Euclidean space), is considered. Such a problem is quite common in the field of machine learning (ML), where optimization methods, in particular gradient descent, are widely used to minimize the loss function during the training stage.…”
Section: Introduction
confidence: 95%
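The setting described in this quote, minimizing a convex but not necessarily differentiable f(x), is exactly where plain gradient descent fails and subgradient methods apply. A minimal sketch of the basic subgradient iteration with the classical diminishing step h_k = 1/(k+1) (illustrative only; this is not the relaxation/space-extension scheme of the cited paper):

```python
import numpy as np

def subgradient_descent(subgrad, x0, steps=2000):
    """Basic subgradient method for convex, possibly non-differentiable f:
        x_{k+1} = x_k - h_k * g_k,   g_k in the subdifferential of f at x_k,
    with the diminishing step h_k = 1/(k+1)."""
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        g = subgrad(x)          # any valid subgradient at the current point
        x = x - g / (k + 1)     # diminishing, non-summable step sizes
    return x

# f(x) = ||x||_1 is convex but non-differentiable wherever a coordinate is 0;
# sign(x) is a valid subgradient of it.
x_min = subgradient_descent(np.sign, x0=[3.0, -2.0])
# both coordinates oscillate toward the minimizer at the origin
print(x_min)
```

Unlike gradient descent with a fixed step, the diminishing step is what lets the iterates settle near the minimizer despite the subgradient not vanishing there; the space-extension methods of the paper accelerate this basic scheme by rescaling the metric.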