2010
DOI: 10.1109/tvt.2009.2034749

Optimal Spacing of a Linearly Interpolated Complex-Gain LUT Predistorter

Cited by 11 publications (23 citation statements)
References 17 publications

“…Several behavioral models and inverse models for nonlinear PAs with memory effects have been proposed in the literature, offering a good inverse model for indirect DPD in most cases [13]–[22]. Among these models, the lookup table (LUT)-based models are generally assumed to be simple to implement; however, optimal spacing and proper bin selection are required for best results [14], [15]. Moreover, when memory effects are considered, a large number of variables need to be stored for cascaded LUTs [16], [17].…”
Section: PA Behavioral Modeling and Inverse Modeling for Indirect (mentioning)
confidence: 99%
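
The spacing and bin-selection point is easy to make concrete. Below is a minimal Python sketch (our own illustration, not code from the cited works) of mapping an input magnitude to a LUT bin, first with uniform spacing and then with mu-law companding as one common nonuniform alternative; the mu-law choice is an assumption for illustration, not the optimal spacing derived in the paper.

    import numpy as np

    def lut_bin_uniform(mag, lut_size, max_mag=1.0):
        """Uniform spacing: bins of equal width across [0, max_mag]."""
        idx = int(mag / max_mag * (lut_size - 1))
        return min(max(idx, 0), lut_size - 1)

    def lut_bin_mulaw(mag, lut_size, max_mag=1.0, mu=255.0):
        """Mu-law companded spacing: denser bins at low magnitudes,
        where a PA's gain characteristic often varies fastest."""
        warped = np.log1p(mu * mag / max_mag) / np.log1p(mu)
        return min(int(warped * (lut_size - 1)), lut_size - 1)
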
“…Moreover, parameter extraction is possible by rewriting (13) in matrix form

    y = A φ_DRF-MFOD    (14)

where y is the output vector of dimension L × 1, L is the length of the training data, and φ_DRF-MFOD is the coefficient vector, which is defined as….”
Section: B. Rational-Function Parameter Extraction (mentioning)
confidence: 99%
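
The matrix form lends itself to ordinary least-squares extraction of the coefficient vector. A minimal sketch, assuming the regressor matrix A has already been built from the model's basis functions evaluated on the training data (the function name is ours):

    import numpy as np

    def extract_coefficients(A, y):
        """Least-squares solution of y = A @ phi.

        A : (L, K) complex regressor matrix, one row per training sample.
        y : (L,)   complex measured output vector.
        Returns the (K,) coefficient vector phi.
        """
        # lstsq computes the pseudo-inverse solution (A^H A)^{-1} A^H y robustly
        phi, *_ = np.linalg.lstsq(A, y, rcond=None)
        return phi
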
“…Linear interpolation greatly reduces the LUT approximation errors and enables significant reduction of the required LUT size [6, 29]. If linear interpolation is used, for each feedback sample magnitude |y_k| falling between addresses n and n + 1, the interpolated complex-gain is…”
Section: Updating a Linearly-Interpolated LUT (mentioning)
confidence: 99%
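
The quotation is cut off before the formula, but the idea is to blend the two LUT entries that bracket the sample magnitude. A minimal sketch of one plausible implementation, assuming a uniformly spaced complex-gain LUT (names and spacing are our assumptions):

    def interp_gain(lut, mag, max_mag=1.0):
        """Linearly interpolate a complex-gain LUT at magnitude `mag`.

        lut : (N,) complex sequence; entry n holds the gain at node
              magnitude n * max_mag / (N - 1), i.e. uniform node spacing.
        """
        pos = mag / max_mag * (len(lut) - 1)   # fractional LUT address
        n = min(int(pos), len(lut) - 2)        # lower node index
        frac = pos - n                         # fraction of the way to n + 1
        return (1.0 - frac) * lut[n] + frac * lut[n + 1]
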
“…The spectral regrowth is significantly reduced. The spectral floor using ZOH is 2 to 3 dB higher, due to the intrinsic half-bit excess quantization noise of the ZOH as compared to linear interpolation [29]. Therefore, even when the feedforward predistorter is chosen to be linearly interpolated, nearest-neighbor adaptation can be used in the update branch of the indirect learning architecture, without much performance penalty.…”
Section: Updating a Linearly-Interpolated LUT (mentioning)
confidence: 99%
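
The 2 to 3 dB figure matches a quick back-of-the-envelope under the standard uniform-quantization noise model (our reasoning, not a derivation from [29]): each bit of effective resolution is worth about 6.02 dB of quantization-noise floor, so a half-bit difference gives

    ΔSNR ≈ 6.02 dB/bit × 0.5 bit ≈ 3 dB,

which is why nearest-neighbor (ZOH) updating costs only a few dB of spectral floor relative to linear interpolation.
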
“…In this article, we propose a set of new orthonormal basis functions to eliminate the correlation among different monomial terms as well as the correlation among data samples. The proposed orthonormal basis functions can be predetermined and implemented with lookup tables (LUTs), which do not increase online computational complexity [15, 16]. A fixed-point implementation is feasible, and online computational complexity is greatly reduced.…”
Section: Introduction (mentioning)
confidence: 99%
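
The LUT trick described here is to evaluate each basis function once, offline, on a magnitude grid, so that run-time evaluation becomes a table lookup. A minimal sketch under our own naming (the basis functions themselves would come from the article's orthonormalization, which is not reproduced here):

    import numpy as np

    def build_basis_luts(basis_funcs, lut_size=256, max_mag=1.0):
        """Precompute basis-function values on a uniform magnitude grid.

        basis_funcs : list of callables, each mapping magnitude -> value.
        Returns an (n_basis, lut_size) array of precomputed values.
        """
        grid = np.linspace(0.0, max_mag, lut_size)
        return np.stack([f(grid) for f in basis_funcs])

    def basis_values(luts, mag, max_mag=1.0):
        """Run-time evaluation: one index computation, then pure lookups."""
        idx = min(int(mag / max_mag * (luts.shape[1] - 1)), luts.shape[1] - 1)
        return luts[:, idx]
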