2022
DOI: 10.1109/tmtt.2022.3209658

Digital Predistortion of RF Power Amplifiers With Decomposed Vector Rotation-Based Recurrent Neural Networks

Abstract: In this article, we present a novel decomposed vector rotation (DVR)-based recurrent neural network behavioral model for digital predistortion (DPD) of radio frequency (RF) power amplifiers (PAs) in wideband scenarios. By representing memory terms of DVR with recurrent states and redesigning the piecewise modeling, we propose a novel recurrent DVR scheme. To ensure stable operation and enhanced modeling accuracy, we integrate the recurrent DVR into the gated learning mechanism of the modified Just Another NETwork…
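
For context, the DVR behavioral model referenced in the abstract builds its regressors from piecewise segments of the signal envelope, recombined with the instantaneous phase of the input. The following Python sketch illustrates that basis construction under stated assumptions: the threshold grid, zero-padded memory handling, and the least-squares fit hinted at in the closing comment are illustrative choices, not the paper's exact recurrent formulation.

import numpy as np

def dvr_basis(x, thresholds, memory_depth):
    # Decomposed vector rotation (DVR) regressors for a complex baseband
    # signal x: piecewise envelope segments | |x(n-m)| - beta | rotated
    # back onto the instantaneous phase of x(n).
    # Illustrative sketch; not the paper's recurrent DVR scheme.
    phase = np.exp(1j * np.angle(x))          # unit-magnitude phase of x(n)
    terms = []
    for m in range(memory_depth + 1):
        xm = np.roll(x, m)                    # delayed sample x(n - m)
        xm[:m] = 0                            # zero-pad the start-up samples
        env = np.abs(xm)                      # envelope |x(n - m)|
        for beta in thresholds:
            terms.append(np.abs(env - beta) * phase)
    return np.stack(terms, axis=1)            # (N, num_terms) regressor matrix

# Coefficients can then be fit against the measured PA output y, e.g.:
# c, *_ = np.linalg.lstsq(dvr_basis(x, betas, M), y, rcond=None)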

Cited by 12 publications (7 citation statements)
References 30 publications (51 reference statements)
“…For the RNN cells, we take inspiration from [8] and [9]. In line with the PA physics, Just Another Network (JANET) [11] was identified as a suitable, lightweight RNN cell for DPD in [8].…”
Section: B. RNN Cell
confidence: 99%
“…Different from the LSTM cell used in [5], a JANET cell consists of only a single sigmoid-based (σ) gating mechanism (forget gate), combined with the hyperbolic tangent (tanh) input activation. The JANET concept is refined in [9] with a more tailored RNN cell (DVR-JANET), with additional dedicated filters for phase and envelope, based on the DVR concept [12]. In addition, separate hidden states are introduced for the I and Q parts, which are jointly gated with a common forget gate.…”
Section: B. RNN Cell
confidence: 99%
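
The single-gate structure described in the statement above is compact enough to write out directly. Below is a minimal NumPy sketch of a basic JANET cell as characterized there (one sigmoid forget gate plus a tanh candidate activation); the class name, weight shapes, and initialization are illustrative assumptions, and the DVR-JANET refinement with separate, jointly gated I and Q hidden states is deliberately not reproduced.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class JANETCell:
    # Single-gate recurrent cell: one sigmoid forget gate f and a tanh
    # candidate activation c, with the coupled update h = f*h + (1-f)*c.
    # Illustrative sketch of the basic JANET form described above.
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        self.Wf = rng.uniform(-s, s, (hidden_size, input_size))
        self.Uf = rng.uniform(-s, s, (hidden_size, hidden_size))
        self.bf = np.ones(hidden_size)        # positive bias: retain memory by default
        self.Wc = rng.uniform(-s, s, (hidden_size, input_size))
        self.Uc = rng.uniform(-s, s, (hidden_size, hidden_size))
        self.bc = np.zeros(hidden_size)

    def step(self, x, h):
        f = sigmoid(self.Wf @ x + self.Uf @ h + self.bf)   # forget gate
        c = np.tanh(self.Wc @ x + self.Uc @ h + self.bc)   # candidate state
        return f * h + (1.0 - f) * c                       # gated state update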
“…Deep, convolutional, and residual feed-forward NN structures are discussed in [4], [7], [8], and [9], which all rely on a similar input data configuration with decomposed I and Q inputs, and the same approach has been employed for linearizing a load-modulated balanced PA [10], for beamforming or MIMO transmitters [11], [12], [13], for joint DPD and PAPR reduction [14], and for self-interference cancellation in full-duplex radio [15], [16]. Recurrent NN (RNN) structures have also been studied as an alternative to feed-forward NNs, suited especially for strong PA memory effects [17], [18], [19], [20]. RNNs are, however, complex to train, since the recurrent structures need to be unrolled to ensure temporal consistency during training.…”
Section: Introduction
confidence: 99%
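
The decomposed I and Q input configuration mentioned in the statement above is, in essence, a real-valued tapped-delay-line feature matrix fed to the network. The sketch below shows one generic way to build it; the function name and the omission of envelope terms (which some variants append) are illustrative assumptions rather than any cited paper's exact layout.

import numpy as np

def iq_memory_features(x, memory_depth):
    # Real-valued input matrix [I(n), Q(n), ..., I(n-M), Q(n-M)] for a
    # feed-forward DPD network operating on complex baseband samples x.
    # Generic sketch of the decomposed I/Q configuration; illustrative only.
    N = len(x)
    feats = []
    for m in range(memory_depth + 1):
        xm = np.concatenate([np.zeros(m, dtype=complex), x[:N - m]])  # delay by m
        feats += [xm.real, xm.imag]           # decomposed I and Q taps
    return np.stack(feats, axis=1)            # (N, 2*(M+1)) real feature matrix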
“…However, the involved linear phase recovery limits the overall modeling capability. Furthermore, models based on recurrent NNs have been proposed in [21], [22], and [25] with the particular aim of improving the modeling of memory effects, albeit typically at the cost of increased training and convergence time [18]. Additionally, generalized NN models for coping with different transmission configurations have been explored, e.g., in [26], [27], and [28].…”
confidence: 99%