2018
DOI: 10.1109/twc.2018.2818125

Compressive Channel Estimation and Multi-User Detection in C-RAN With Low-Complexity Methods


Cited by 34 publications (20 citation statements)
References 30 publications
“…However, such large-scale optimization easily converges to a bad local minimum [40]. The layer-wise training strategy is used to separate (16) into the following two parts, i.e., (17) and (18), where (17) finds a good initialization for (18) so as to avoid bad local minima of (16). In particular, for training a k-layer network, we denote Θ_{0:k−1} as the trainable parameters from layer 0 to layer k − 1, and Θ_{k−1} as the trainable parameters in layer k − 1.…”
Section: B. Deep Neural Network Training and Testing
confidence: 99%
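The two-stage pattern described in this excerpt can be illustrated with a minimal sketch. This is not the cited paper's network: UnfoldedLayer, layerwise_train, _fit, the MSE loss and all hyper-parameters are hypothetical placeholders; only the structure (first train the newly appended layer alone, then fine-tune all layers jointly, mirroring (17) and (18)) follows the quoted description.

```python
import torch
import torch.nn as nn

class UnfoldedLayer(nn.Module):
    """Hypothetical stand-in for one unfolded iteration of the network."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.lin(x))

def _fit(net, data_loader, epochs, lr):
    # Train only the parameters that are currently trainable.
    params = [p for p in net.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, target in data_loader:
            opt.zero_grad()
            loss = loss_fn(net(x), target)
            loss.backward()
            opt.step()

def layerwise_train(num_layers, dim, data_loader, epochs=5, lr=1e-3):
    layers = nn.ModuleList()
    for k in range(num_layers):
        layers.append(UnfoldedLayer(dim))
        net = nn.Sequential(*layers)
        # Stage analogous to (17): train only the newly appended layer
        # (Θ_{k−1} in the excerpt's indexing), with earlier layers frozen,
        # to obtain a good initialization.
        for p in net.parameters():
            p.requires_grad_(False)
        for p in layers[-1].parameters():
            p.requires_grad_(True)
        _fit(net, data_loader, epochs, lr)
        # Stage analogous to (18): fine-tune all layers jointly (Θ_{0:k−1}).
        for p in net.parameters():
            p.requires_grad_(True)
        _fit(net, data_loader, epochs, lr)
    return nn.Sequential(*layers)
```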
“…Problem (2) is a non-differentiable optimization problem for which classical sub-gradient methods often exhibit rather slow convergence. As discussed in Section I, several first-order methods have been applied to solve (2), such as FISTA [7], Prox-Gradient [8] and ADMM [9], [5]. However, in order to achieve fast convergence, these methods require either a centralized step, such as the line search routine in the Prox-Gradient method, or a pre-conditioner that scales all variables in advance.…”
Section: System Model and Problem Formulation
confidence: 99%
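For context on the first-order methods named in this excerpt, below is a minimal FISTA sketch. It assumes Problem (2) has a mixed ℓ1/ℓ2 (group-LASSO) least-squares form, as in the last excerpt on this page; the function names, the group encoding, the fixed step size 1/L and the iteration count are illustrative assumptions, and the centralized ingredients the excerpt highlights (line search, pre-conditioning) are deliberately left out.

```python
import numpy as np

def block_soft_threshold(x, groups, tau):
    """Proximal operator of tau * sum_g ||x_g||_2 (block soft-thresholding)."""
    out = x.copy()
    for g in groups:                      # g: index array for one group
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm <= tau else (1.0 - tau / norm) * x[g]
    return out

def fista_group_lasso(A, b, groups, lam, n_iter=200):
    """Minimize 0.5*||A x - b||_2^2 + lam * sum_g ||x_g||_2 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = block_soft_threshold(z - grad / L, groups, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

Plain proximal gradient (ISTA) is the same update without the extrapolation step, which is where the slower convergence the excerpt mentions comes from.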
“…Algorithm 1 outlines a tailored version of ALADIN [10] for solving (5). The algorithm also has two main steps, a parallelizable step and a consensus step.…”
Section: A. Augmented Lagrangian Based Alternating Direction Inexact N...
confidence: 99%
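The excerpt does not reproduce ALADIN's updates, so the sketch below only illustrates the generic "parallelizable step + consensus step" structure it mentions, using standard consensus ADMM on a least-squares splitting. It is not ALADIN itself; the quadratic local objectives, penalty rho and iteration count are assumptions made for illustration.

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, n_iter=100):
    """Solve min_x sum_i 0.5*||A_i x - b_i||^2 by consensus ADMM (illustrative)."""
    n = A_list[0].shape[1]
    N = len(A_list)
    x = [np.zeros(n) for _ in range(N)]   # local copies, one per agent
    u = [np.zeros(n) for _ in range(N)]   # scaled dual variables
    z = np.zeros(n)                       # consensus variable
    for _ in range(n_iter):
        # Parallelizable step: each agent solves its local subproblem independently.
        for i, (A, b) in enumerate(zip(A_list, b_list)):
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                   A.T @ b + rho * (z - u[i]))
        # Consensus step: average the local estimates, then update the duals.
        z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
        for i in range(N):
            u[i] += x[i] - z
    return z
```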
“…However, AMP algorithms often fail to converge when the preamble signature matrix is mildly ill-conditioned or non-Gaussian [11]. The authors in [12], [13] introduced a mixed ℓ1/ℓ2-norm convex relaxation method to reformulate the problem as a form of group LASSO. Many methods can be used to solve the LASSO problem, such as the interior-point method [12] and the iterative shrinkage-thresholding algorithm (ISTA) [14].…”
Section: Introduction
confidence: 99%
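Once the group-LASSO reformulation mentioned in this excerpt has been solved (by ISTA, an interior-point solver, or the FISTA sketch earlier on this page), user activity can be read off the per-group norms of the estimate. The following is an illustration only, assuming one group of coefficients per user; detect_active_users, the group layout and the threshold value are not taken from the cited papers.

```python
import numpy as np

def detect_active_users(x_hat, groups, threshold=1e-3):
    """Return indices of groups (users) whose coefficient block is non-negligible."""
    return [k for k, g in enumerate(groups)
            if np.linalg.norm(x_hat[g]) > threshold]

# Toy example: 3 users, 2 coefficients each; only user 1 is active.
x_hat = np.array([0.0, 0.0, 0.7, -0.2, 1e-6, 0.0])
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
print(detect_active_users(x_hat, groups))   # -> [1]
```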