2022
DOI: 10.1109/tmtt.2021.3124226
Inverse Covariance Matrix Estimation for Low-Complexity Closed-Loop DPD Systems: Methods and Performance

Cited by 3 publications (7 citation statements)
References 47 publications
“…In the first category, the model parameterization may enter the iterative procedure until optimal convergence is reached. Recalculating the basis-function matrix becomes more challenging when the signal direction changes during real-world transmission [105]. The second and third operations are the most resource-intensive categories: the cost of calculating the model's output signal, in FLOPs, is driven by the number of model coefficients and training samples.…”
Section: Complexity Discussion
Citation type: mentioning (confidence: 99%)
“…The second and third operations are the most resource-intensive categories: the cost of calculating the model's output signal, in FLOPs, is driven by the number of model coefficients and training samples. To avoid such intensive computation, it is advisable to arrange the vector components of the DPD input data symbols into a covariance matrix whose structure admits an efficient inverse [105], [106].…”
Section: Complexity Discussion
Citation type: mentioning (confidence: 99%)
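The covariance-matrix view alluded to above can be sketched in a few lines: the least-squares DPD coefficients are obtained from the Gram (covariance) matrix BᴴB of the basis-function matrix, solved rather than explicitly inverted. The memory-polynomial basis and the toy nonlinearity below are illustrative assumptions, not the cited paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def basis_matrix(x, K=3, M=2):
    """Illustrative memory-polynomial basis: columns x[n-m] * |x[n-m]|^k."""
    cols = []
    for m in range(M):
        xm = np.roll(x, m)  # simple circular delay for illustration
        for k in range(K):
            cols.append(xm * np.abs(xm) ** k)
    return np.column_stack(cols)

# Toy complex baseband input and a mildly nonlinear "PA output"
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)
y = 0.9 * x - 0.1 * x * np.abs(x) ** 2

B = basis_matrix(x)
R = B.conj().T @ B                      # covariance (Gram) matrix B^H B
w = np.linalg.solve(R, B.conj().T @ y)  # LS coefficients: R^{-1} B^H y
```

Solving the normal equations through `np.linalg.solve` avoids forming R⁻¹ explicitly, which is both cheaper and numerically safer when R is well conditioned.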
“…Eq. (24) can be optimized while keeping a steady convergence. The error vector is defined as g = [g₁, g₂, …, g_N]ᵀ, and the N × N Jacobian matrix J has entries J_{i,j} = ∂g_i/∂w_j, i, j = 1, …, N. Equation (24) can then be written in the matrix form

w(k) = w(k − 1) − μ (JᵀJ)⁻¹ Jᵀ g,   (25)

where μ is the step size. Adopting the GN iteration avoids the cumbersome calculation of second-order derivatives, which greatly reduces the complexity of the algorithm.…”
Section: Gauss-Newton Iteration
Citation type: mentioning (confidence: 99%)
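The damped Gauss-Newton update of Eq. (25) can be sketched as follows; the exponential-fit residual used here is a stand-in problem chosen for illustration, not the cited paper's cost function.

```python
import numpy as np

def gauss_newton_step(w, residual, jacobian, mu=1.0):
    """One damped GN update: w <- w - mu * (J^T J)^{-1} J^T g (Eq. 25)."""
    g = residual(w)   # error vector g
    J = jacobian(w)   # Jacobian with J[i, j] = dg_i / dw_j
    # Solve the normal equations instead of forming the explicit inverse
    step = np.linalg.solve(J.T @ J, J.T @ g)
    return w - mu * step

# Toy nonlinear least-squares problem: fit y = a * exp(b * x)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * x)

def residual(w):
    a, b = w
    return a * np.exp(b * x) - y

def jacobian(w):
    a, b = w
    return np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])

w = np.array([1.0, 0.0])
for _ in range(20):
    w = gauss_newton_step(w, residual, jacobian)
# w converges toward [2.0, 0.5] on this noiseless fit
```

Only first derivatives appear in the update, which is exactly the saving over a full Newton step that the quoted passage highlights.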
“…Aiming at the nonlinear characteristics of VLC systems, we propose using the GN iteration to solve the nonlinear problem. The GN method is intended to deliver high estimation accuracy, fast convergence, and excellent overall predistortion performance when solving unconstrained nonlinear least-squares problems [25], [26].…”
Section: Implementation Algorithms
Citation type: mentioning (confidence: 99%)