2018
DOI: 10.1016/j.neucom.2018.07.007

A novel one-layer recurrent neural network for the l1-regularized least square problem

Abstract: The l1-regularized least square problem, or the lasso, is a non-smooth convex minimization that is widely used in diverse fields. However, solving such a minimization is not straightforward, since the objective is not differentiable. In this paper, an equivalent smooth minimization with box constraints is obtained and proved to be equivalent to the lasso problem. Accordingly, an efficient recurrent neural network is developed that is guaranteed to converge globally to the solution of the lasso. Further, it is investi…
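For context, the underlying problem is the standard lasso formulation, written below in the usual notation; the symbols A, b, and λ are our addition and are not quoted from the abstract.

```latex
% l1-regularized least squares (lasso): convex but non-smooth,
% since the l1 term is not differentiable at zero.
\min_{x \in \mathbb{R}^{n}} \; \tfrac{1}{2}\,\lVert Ax - b\rVert_{2}^{2} \;+\; \lambda\,\lVert x\rVert_{1},
\qquad A \in \mathbb{R}^{m \times n},\; b \in \mathbb{R}^{m},\; \lambda > 0 .
```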

Cited by 13 publications (12 citation statements); References 35 publications
“…The decrease in the mean square error value also proves the working of the feedback gradient descent optimization mechanism of the algorithm, through which the parameters are made to fit the model correctly by improving the weights of the input data [31]. It also demonstrates that the data converge reliably to the robust model for the given dataset with the constant parameters [32].…”
Section: Mean Square Error (supporting)
confidence: 59%
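As a hedged illustration of the point made in this statement, the sketch below is entirely our own: the data, learning rate, and variable names are assumed and do not come from the cited work. It fits a linear model by batch gradient descent and prints the mean square error, which decreases as the weights are updated.

```python
import numpy as np

# Minimal sketch (our illustration, not the cited paper's code): monitor the
# mean square error while fitting a linear model by batch gradient descent.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))          # input data
w_true = np.array([1.5, -2.0, 0.0, 0.7, 3.0])
y = X @ w_true + 0.1 * rng.standard_normal(200)

w = np.zeros(5)                            # model weights, updated by feedback
lr = 0.05                                  # learning rate (assumed value)
for step in range(200):
    residual = X @ w - y
    mse = np.mean(residual ** 2)           # decreasing MSE indicates the fit improves
    grad = 2 * X.T @ residual / len(y)     # gradient of the MSE w.r.t. the weights
    w -= lr * grad
    if step % 50 == 0:
        print(f"step {step:3d}  MSE {mse:.4f}")
```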
“…The latter observation describes a form of stability, which coincides with the conclusion presented in Theorem 2. In addition, it is quite expected in compressive sampling that augmenting the number of measurements m generally improves the recovery quality (progressing from ill-posed and under-determined systems to well-posed ones) [20,21,35–36]. This fact is also noticed in the different plots of Figures 2 and 3.…”
Section: Simulation and Discussion (mentioning)
confidence: 64%
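The sketch below illustrates the effect described above, using a plain ISTA solver for the lasso rather than the paper's recurrent network; all data sizes, the regularization weight lam, and the function ista are our assumptions for illustration only.

```python
import numpy as np

def ista(A, b, lam=0.05, steps=500):
    """Solve the lasso min_x 0.5*||Ax-b||^2 + lam*||x||_1 by ISTA (a standard
    iterative method, used here only to show the effect of the measurement count m)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(1)
n, k = 100, 5                               # signal length and sparsity (assumed)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

for m in (20, 40, 80):                      # more measurements -> better recovery
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    b = A @ x_true
    x_hat = ista(A, b)
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"m = {m:3d}  relative error = {err:.3f}")
```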
“…In recent decades, numerous optimization problems arising in various engineering applications have required real-time processing [8,20,21]. Classical digital techniques, such as gradient projection-based algorithms or subgradient strategies, are mostly not convenient from the standpoint of computational time or resource allocation in computer networks [16,17,22–23].…”
Section: Introduction (mentioning)
confidence: 99%
“…is a (3p + 3)-dimensional column vector, w_s ≥ 0 (s = 1, 2, 3), and ∑_{s=1}^{3} w_s = 1. To solve problem (20)–(23) we first use the following proposition [72]. Let u, v ∈ ℝⁿ be auxiliary variables such that…
Section: An Optimization Model (mentioning)
confidence: 99%
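The quotation is truncated before the proposition itself is stated; a standard form of this u–v splitting, given here only as a hedged reconstruction that may differ from the proposition actually cited as [72], is the following.

```latex
% Standard nonnegative splitting (our assumption; the quoted text breaks off
% before the proposition's statement): write x = u - v with u, v >= 0, so that
% ||x||_1 = 1^T(u + v) at any pair with u_i v_i = 0, and the lasso becomes a
% smooth, box-constrained quadratic program:
\min_{u \ge 0,\; v \ge 0} \; \tfrac{1}{2}\,\lVert A(u - v) - b\rVert_{2}^{2} \;+\; \lambda\,\mathbf{1}^{\top}(u + v).
```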