2020
DOI: 10.1109/mmm.2020.3023220
Beyond the Moore-Penrose Inverse: Strategies for the Estimation of Digital Predistortion Linearization Parameters

Cited by 17 publications (5 citation statements)
References 36 publications
“…As a variant of the ANN, the ELM does not require gradient-based backpropagation to adjust the weights but sets the weights through the Moore-Penrose generalized inverse [20]. The standard ELM neural network structure is shown in Fig.…”
Section: Related Work 2.1 Extreme Learning Machine
confidence: 99%
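The weight-setting step quoted above — random hidden-layer weights, then a one-shot solve via the Moore-Penrose generalized inverse instead of backpropagation — can be sketched in NumPy. The network size, activation, and toy regression task here are illustrative assumptions, not details from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) from noisy-free samples.
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

# ELM: input-to-hidden weights and biases are drawn at random and then frozen.
n_hidden = 50
W = rng.normal(size=(1, n_hidden))   # random input weights (never trained)
b = rng.normal(size=(1, n_hidden))   # random hidden biases (never trained)
H = np.tanh(X @ W + b)               # hidden-layer activation matrix

# Output weights in one shot via the Moore-Penrose pseudoinverse — no
# gradient-based backpropagation is involved.
beta = np.linalg.pinv(H) @ y

# Fit check on the training set.
y_hat = H @ beta
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
```

Because only the output layer is solved for, training reduces to a single linear least-squares problem, which is what makes the ELM fast to identify.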
“…While linear-in-parameter models, such as GMP-based piecewise models, can be identified with a closed-form least-squares solution [39], NNs require iterative optimization techniques that use gradient estimates to converge the model parameters. In this work, we use the adaptive moment estimation (Adam) optimizer, an extension of stochastic gradient descent that maintains a per-parameter learning rate based on the first and second moments of the gradients [40].…”
Section: End-to-end NN Training
confidence: 99%
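The Adam update the excerpt describes — a per-parameter step size derived from running estimates of the first and second moments of the gradient — can be written out directly. The quadratic objective below is a stand-in for the NN loss, not the DPD model from the cited work:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: per-parameter rate from first/second gradient moments."""
    m = b1 * m + (1 - b1) * grad          # first moment (running mean of grad)
    v = b2 * v + (1 - b2) * grad ** 2     # second moment (running mean of grad^2)
    m_hat = m / (1 - b1 ** t)             # bias correction for the zero init
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = ||theta - target||^2 as a toy stand-in for the NN loss.
target = np.array([1.0, -2.0, 3.0])
theta = np.zeros(3)
m = np.zeros(3)
v = np.zeros(3)
for t in range(1, 5001):
    grad = 2.0 * (theta - target)         # exact gradient of the quadratic
    theta, m, v = adam_step(theta, grad, m, v, t)
```

The division by the square root of the second moment is what gives each parameter its own effective learning rate, in contrast to the single global step of plain SGD.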
“…These circumstances are particularly challenging for a robust DPD system. Hence, two kinds of precautionary measures can be taken to alleviate the ill-conditioning issues in ADPD: 1) pruning the PD polynomial model so that only the relevant coefficient sets of the basis functions are updated [183]-[185]; 2) applying a regularization matrix to stabilize the estimation of the weight vector, especially in iterative online linearization [60], [100], [179], [186]-[188].…”
Section: Adaptive DPD (ADPD)
confidence: 99%
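The second measure mentioned above — adding a regularization matrix to stabilize the weight-vector estimate — is, in its simplest diagonal form, Tikhonov (ridge) regularization of the least-squares solve. A minimal NumPy sketch, with synthetic near-collinear regressors standing in for the ill-conditioned basis matrices of a high-order polynomial DPD model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two nearly identical columns make X^T X close to singular, mimicking the
# correlated basis functions of a high-order polynomial predistorter.
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 1e-6 * rng.normal(size=n)   # almost a copy of x1
X = np.column_stack([x1, x2, rng.normal(size=n)])
w_true = np.array([1.0, 1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=n)

# Plain least squares inverts X^T X directly, which is unstable here.
# Tikhonov regularization adds lam * I before the solve, bounding the
# smallest eigenvalue and keeping the weight estimate well-behaved.
lam = 1e-3
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
```

The regularizer shrinks the solution slightly but prevents the huge, noise-driven coefficient swings that an unregularized solve produces on near-singular normal equations — the "steadying" effect the excerpt refers to.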