2015 23rd European Signal Processing Conference (EUSIPCO)
DOI: 10.1109/eusipco.2015.7362679

A novel line search method for nonsmooth optimization problems

Abstract: In this paper, we propose a novel exact/successive line search method for stepsize calculation in iterative algorithms for nonsmooth optimization problems. The proposed approach performs the line search over a properly constructed differentiable function based on the original nonsmooth objective function, and it outperforms state-of-the-art techniques in terms of convergence speed, computational complexity, and signaling burden. When applied to LASSO, the proposed exact line search is shown, either…
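The abstract describes replacing the nonsmooth line-search subproblem with a differentiable one. A minimal sketch of this idea for LASSO, f(x) = ||y − Hx||² + μ||x||₁: along the segment x + γ(x̂ − x), γ ∈ [0, 1], bound the ℓ1 term by its convex combination γ||x̂||₁ + (1 − γ)||x||₁, which makes the surrogate a univariate quadratic with a closed-form minimizer. This is one common construction consistent with the abstract's description, not necessarily the paper's exact method; the function name `exact_line_search_lasso` is illustrative.

```python
import numpy as np

def exact_line_search_lasso(y, H, x, x_hat, mu):
    """Closed-form stepsize for LASSO along d = x_hat - x (a sketch).

    Minimizes the differentiable surrogate
        phi(gamma) = ||y - H(x + gamma*d)||^2
                     + mu * (gamma*||x_hat||_1 + (1 - gamma)*||x||_1)
    over gamma in [0, 1]. Since phi upper-bounds the true LASSO
    objective on the segment and matches it at gamma = 0, the
    returned stepsize never increases the true objective.
    """
    d = x_hat - x
    r = y - H @ x                     # current residual
    Hd = H @ d
    a = Hd @ Hd                       # quadratic coefficient ||Hd||^2
    if a == 0.0:
        return 0.0                    # degenerate direction: stay put
    # Stationary point of phi, then projected onto [0, 1].
    num = r @ Hd - 0.5 * mu * (np.abs(x_hat).sum() - np.abs(x).sum())
    return float(np.clip(num / a, 0.0, 1.0))
```

Because the surrogate is minimized over a set containing γ = 0, the step is monotone: f(x + γ*d) ≤ f(x) for any candidate point x̂.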

Cited by 2 publications (3 citation statements). References 14 publications.
“…To achieve this goal, we exploit the fact that the objective function in (5a) of problem (5) is convex in each variable x_n, ∀n ∈ N. Employing the Jacobi algorithm [25]–[27], we can formulate the approximate function of the original objective function f(x) = ||y − Hx||², whose convergence to the optimal solution of the original problem is ensured with an appropriate step-size selection. Let x^t denote the approximate solution to problem (5) obtained in the (t − 1)-th iteration.…”
Section: B. Parallel Low-Complexity Iterative Algorithm
confidence: 99%
“…The vector x̂^t − x^t represents a descent direction of the objective function f(x) in the domain of problem (5) [26]. Therefore, the vector x^t is updated after each iteration, using the following rule:…”
Section: B. Parallel Low-Complexity Iterative Algorithm
confidence: 99%
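The cited update rule is elided in the excerpt; Jacobi-type best-response schemes of this kind typically use x^{t+1} = x^t + γ^t(x̂^t − x^t), where x̂^t collects the per-coordinate minimizers of f(x) = ||y − Hx||² computed in parallel against the current iterate. A minimal sketch under that assumption (the name `jacobi_iteration` is hypothetical; the cited works select γ^t via a line search or decreasing rule, and nonzero columns of H are assumed):

```python
import numpy as np

def jacobi_iteration(y, H, x, gamma):
    """One Jacobi step for f(x) = ||y - Hx||^2 (a sketch).

    Each coordinate's best response, holding the others at the current
    iterate, is x_hat_n = x_n + h_n^T r / ||h_n||^2; all N responses are
    computed in parallel, then combined with stepsize gamma in [0, 1].
    """
    r = y - H @ x                      # current residual
    col_norms = np.sum(H * H, axis=0)  # ||h_n||^2, assumed nonzero
    x_hat = x + (H.T @ r) / col_norms  # per-coordinate best responses
    return x + gamma * (x_hat - x)     # x^{t+1} = x^t + gamma*(x_hat - x^t)
```

With γ = 1/N the update equals the average of the N single-coordinate minimizations, so by convexity the objective is non-increasing; the line search from the headline paper instead picks γ^t to maximize the per-iteration decrease.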