ICC 2020 - IEEE International Conference on Communications (ICC), 2020
DOI: 10.1109/icc40277.2020.9149283

Data-Aided Channel Estimator for MIMO Systems via Reinforcement Learning

Cited by 10 publications (41 citation statements)
References 9 publications
“…Recently, a reinforcement learning (RL) approach was introduced in [26] for data-aided channel estimation. In this approach, a Markov decision process (MDP) problem is formulated to minimize the estimation error, and an RL algorithm is used to solve the MDP problem.…”
Section: Introduction
Mentioning confidence: 99%
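The statement above frames data-aided channel estimation as sequentially choosing detected symbol vectors to refine a pilot-based channel estimate. The sketch below illustrates that selection problem under simplifying assumptions: least-squares re-estimation and a greedy surrogate reward (residual fit error) stand in for the paper's MDP/RL machinery, and all function names, array shapes, and the reward definition are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def ls_channel_estimate(Y, X):
    """Least-squares fit of Y ~ H @ X:  H = Y X^H (X X^H)^{-1}."""
    return Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

def greedy_data_aided_estimation(Y_pilot, X_pilot, Y_data, X_detected, n_select):
    """Greedily add detected data symbol vectors (the 'actions') whose
    inclusion most reduces a surrogate of the channel estimation error.
    This replaces the RL policy of [26] with a one-step greedy rule."""
    Ys, Xs = Y_pilot, X_pilot
    H_hat = ls_channel_estimate(Ys, Xs)      # pilot-only estimate
    selected = []
    for _ in range(n_select):
        best_k, best_err, best = None, np.inf, None
        for k in range(Y_data.shape[1]):
            if k in selected:
                continue
            # Tentatively augment the pilot block with detected symbol vector k.
            Y_try = np.hstack([Ys, Y_data[:, [k]]])
            X_try = np.hstack([Xs, X_detected[:, [k]]])
            H_try = ls_channel_estimate(Y_try, X_try)
            # Surrogate reward: residual fit error of the augmented estimate.
            err = np.linalg.norm(Y_try - H_try @ X_try) ** 2
            if err < best_err:
                best_k, best_err, best = k, err, (Y_try, X_try, H_try)
        selected.append(best_k)
        Ys, Xs, H_hat = best
    return H_hat, selected
```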
“…However, this solution is difficult to implement in practical systems because of the considerable complexity and latency of computing the optimal policy. For example, using the approach in [26] to calculate the optimal policy requires all a posteriori probabilities (APPs) in a data block. A further limitation is that the optimal policy is characterized by a specific discounting factor.…”
Section: Introduction
Mentioning confidence: 99%
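For reference, the discounting factor mentioned above enters through the standard discounted return that an RL policy maximizes; a generic form is shown below (the paper's exact reward definition is not reproduced here).

```latex
% Generic discounted return; \gamma is the discounting factor referred to above.
G_t = \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1}, \qquad 0 \le \gamma < 1
```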
“…An RL algorithm for optimizing the symbol vector selection of data-aided channel estimation was first introduced in our prior work [1]. In this algorithm, the optimal policy of the MDP is derived under a simplistic assumption that underestimates the effect of future actions and rewards.…”
Section: Introduction
Mentioning confidence: 99%
“…In this paper, we generalize the RL algorithm in [1] by employing the Monte Carlo tree search (MCTS) approach, which provides a more accurate evaluation of the effect of future actions and rewards. In addition to this major change, we introduce a semi-data-aided channel estimation strategy to further reduce the delay required for updating the channel estimate, as well as a data re-detection strategy to improve detection performance after the symbol vector selection.…”
Section: Introduction
Mentioning confidence: 99%
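The statement above replaces the one-step policy with MCTS so that future actions and rewards are evaluated by simulation. As a point of reference only, a generic UCT-style MCTS loop looks roughly like the sketch below; the environment interface (actions, step, rollout) is a placeholder assumption and is not the channel-estimation formulation used in the cited papers.

```python
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}            # action -> child Node
        self.visits, self.value = 0, 0.0

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCT score (exploitation + exploration).
    return max(
        node.children.values(),
        key=lambda ch: ch.value / (ch.visits + 1e-9)
        + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)),
    )

def mcts(root_state, env, n_iter=200):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        # 1) Selection: walk down while every action has been expanded.
        while node.children and len(node.children) == len(env.actions(node.state)):
            node = uct_select(node)
        # 2) Expansion: add one child for an untried action, if any remain.
        untried = [a for a in env.actions(node.state) if a not in node.children]
        if untried:
            a = random.choice(untried)
            node.children[a] = Node(env.step(node.state, a), parent=node)
            node = node.children[a]
        # 3) Simulation: a rollout estimates the future reward from this state.
        reward = env.rollout(node.state)
        # 4) Backpropagation: update statistics along the visited path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most-visited root action.
    return max(root.children, key=lambda a: root.children[a].visits)
```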