2013
DOI: 10.1017/s1759078713000159

New order selection technique using information criteria applied to SISO and MIMO systems predistortion

Abstract: This paper presents a new order selection technique for the matrix memory polynomial model that captures the nonlinearities of single-branch and multi-branch transmitters. The new criteria take the complexity of the model into account in addition to its mean-square error. The quasi-convexity of the proposed criteria is proven in this work. Using the proposed Akaike information criterion (AIC) and Bayesian information criterion (BIC) criteria, the model order selection is cast as a …
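To make the idea of complexity-penalised order selection concrete, the sketch below shows one common way such a selection could be carried out for a memory polynomial PA model: fit each candidate (nonlinearity order, memory depth) pair by least squares and keep the pair minimising BIC. This is a minimal illustrative sketch only; the synthetic PA data, the brute-force sweep, and all function names are assumptions, not the paper's actual matrix memory polynomial formulation or its quasi-convex search.

```python
import numpy as np

def memory_polynomial_matrix(x, K, M):
    """Regression matrix with columns x[n-m] * |x[n-m]|**(k-1), k = 1..K, m = 0..M-1."""
    N = len(x)
    cols = []
    for m in range(M):
        xm = np.concatenate([np.zeros(m, dtype=complex), x[:N - m]])
        for k in range(1, K + 1):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

def aic_bic(x, y, K, M):
    """Least-squares fit of a (K, M) memory polynomial; AIC/BIC assuming Gaussian residuals."""
    Phi = memory_polynomial_matrix(x, K, M)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    mse = np.mean(np.abs(y - Phi @ c) ** 2)
    N, p = len(y), Phi.shape[1]
    return N * np.log(mse) + 2 * p, N * np.log(mse) + p * np.log(N)

# Toy experiment: a mildly nonlinear PA with one memory tap (hypothetical data).
rng = np.random.default_rng(0)
x = (rng.standard_normal(4000) + 1j * rng.standard_normal(4000)) / np.sqrt(2)
y = x + 0.05 * np.abs(x) ** 2 * x + 0.02 * np.roll(x, 1)
y = y + 0.01 * (rng.standard_normal(4000) + 1j * rng.standard_normal(4000))

# Sweep candidate orders and keep the BIC minimiser.
best = min(((K, M) for K in range(1, 6) for M in range(1, 4)),
           key=lambda km: aic_bic(x, y, *km)[1])
print("BIC-selected (nonlinearity order, memory depth):", best)
```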

Cited by 14 publications (6 citation statements). References 16 publications.
“…The general character of the CVS model provides a complete set of regressors with a high number of coefficients and, at the same time, an associated suitability for a pruning procedure without a presumption on the significant regressors. The application of the Bayesian information criterion (BIC) was proposed in [13], [14], [16] to select the significant parameters. Whereas [16] is based on a simulated annealing algorithm with all the possible variants of the model, the approach in [13] is based on the application of the OMP algorithm to represent the PA output as its projection onto the span of Volterra regressors, and on the BIC rule to discard the irrelevant coefficients, maintaining only the active regressors.…”
Section: B. Identification Procedures
mentioning
confidence: 99%
“…It is worth noticing that while LS identification is affected by regressor correlation, the OMP algorithm guarantees the recovery of the exact value of the coefficients in a noiseless environment in a given number of iterations [17].…”
Section: B. Identification Procedures
mentioning
confidence: 99%
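The OMP-plus-BIC pruning described in the statements above (greedily projecting the PA output onto the span of the Volterra regressors, then keeping only the terms the BIC rule deems active) can be illustrated as follows. This is a minimal sketch using the same Gaussian-residual BIC form as above; the function name, maximum term count, and selection-by-minimum-BIC rule are illustrative assumptions, not the implementation in [13].

```python
import numpy as np

def omp_bic(Phi, y, max_terms=20):
    """Greedy OMP over the columns of Phi (Volterra/memory-polynomial regressors),
    with the retained model size chosen by the BIC rule (illustrative sketch)."""
    N = len(y)
    residual = y.copy()
    support, history = [], []
    for _ in range(min(max_terms, Phi.shape[1])):
        # Pick the regressor most correlated with the current residual.
        corr = np.abs(Phi.conj().T @ residual)
        corr[support] = 0.0                      # do not reselect active terms
        support.append(int(np.argmax(corr)))
        # Re-fit all selected coefficients jointly by least squares.
        c, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ c
        mse = np.mean(np.abs(residual) ** 2)
        history.append((N * np.log(mse) + len(support) * np.log(N), list(support), c))
    # Keep the support (and coefficients) giving the smallest BIC.
    _, best_support, best_coeffs = min(history, key=lambda h: h[0])
    return best_support, best_coeffs
```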
“…This paper proposes a new metric which can be used to evaluate the trade-off between model accuracy and complexity achieved by a behavioural model. In contrast to [16,17,23,24], which attempt to solve an optimisation problem (which might be computationally expensive depending on the problem size), this paper proposes a generalised metric which can be utilised by designers to compare various models and choose among them, regardless of the particulars of the model used and its structure. Indeed, in [16,17,23], the compressive sensing and least absolute shrinkage and selection operator (LASSO) approaches were used to extract a subset of relevant coefficients.…”
Section: Introduction
mentioning
confidence: 99%
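For comparison, a LASSO-style selection of relevant coefficients, of the kind referenced for [16, 17, 23] above, can be sketched with an off-the-shelf solver. Because scikit-learn's Lasso handles real data only, the complex least-squares problem is rewritten with stacked real and imaginary parts; the penalty weight, threshold, and helper name are illustrative assumptions, and penalising real and imaginary parts separately is only an approximation of a complex (group) LASSO.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_select(Phi, y, alpha=1e-3, tol=1e-8):
    """Sparse selection of behavioural-model regressors via an l1 penalty
    (illustrative sketch; real/imaginary parts are stacked for the real-valued solver)."""
    # Real-valued reformulation of the complex LS problem y = Phi @ c.
    A = np.block([[Phi.real, -Phi.imag],
                  [Phi.imag,  Phi.real]])
    b = np.concatenate([y.real, y.imag])
    fit = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A, b)
    P = Phi.shape[1]
    coeffs = fit.coef_[:P] + 1j * fit.coef_[P:]
    active = np.flatnonzero(np.abs(coeffs) > tol)    # retained regressors
    return active, coeffs[active]
```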
“…Both methods require a priori knowledge of the memory depth and non-linearity order in order to obtain the shortest subset of coefficients within the full model. In [24], the AIC and BIC derivations assume that the modelling error follows a Gaussian distribution. In contrast, the CAN metric proposed in this paper is general and does not have such a constraint.…”
Section: Introduction
mentioning
confidence: 99%
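For reference, under the Gaussian modelling-error assumption mentioned above, the maximised log-likelihood reduces to a function of the residual mean-square error, so the two criteria take the familiar forms (up to model-independent constants; this is the standard textbook form, not a quotation from [24]):

$$
\mathrm{AIC} = N \ln\!\left(\frac{1}{N}\sum_{n=1}^{N}\bigl|y(n)-\hat{y}(n)\bigr|^{2}\right) + 2p,
\qquad
\mathrm{BIC} = N \ln\!\left(\frac{1}{N}\sum_{n=1}^{N}\bigl|y(n)-\hat{y}(n)\bigr|^{2}\right) + p\ln N,
$$

where $N$ is the number of samples, $\hat{y}(n)$ the model output, and $p$ the number of estimated coefficients; both penalise the fit error with a term that grows with model complexity.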
“…Again, a compact and effective DPD linearizer is constructed and demonstrated. The paper by Mehdi et al. [3] applies model-order reduction techniques such as the Akaike and Bayesian information criteria to reduce the number of model parameters in a systematic manner, again trading off model complexity against accuracy to arrive at the optimum model. This approach has been applied to build pre-distorters for traditional single-input single-output (SISO) PAs and also multiple-input multiple-output (MIMO) PA systems.…”
mentioning
confidence: 99%