2010 IEEE 71st Vehicular Technology Conference
DOI: 10.1109/vetecs.2010.5493950

Distributed Q-Learning for Interference Control in OFDMA-Based Femtocell Networks

Cited by 79 publications (82 citation statements). References 6 publications.
“…In these studies, the solutions rely on finding subchannel allocations for the femtocells such that interference is mitigated through frequency diversity. These approaches are essentially formal implementations of LTE's ICIC [13], [14]. It is also possible to enhance interference mitigation in white space networks by going beyond the spectrum-availability information held in a regulator's approved databases.…”
Section: Related Work (mentioning, confidence: 99%)
“…Under these assumptions, and considering that, due to clustering, each CPE reports SNIR values similar to one another, the control loop can be modeled as shown in Eq. (13). A schematic diagram of the complete system is shown in Fig.…”
Section: Stability Analysis and Convergence (mentioning, confidence: 99%)
“…The considered cost of 500 applies only when the total transmit power P_tot^n exceeds the maximum transmit power of a BS. This value was selected heuristically and provides the best performance/convergence trade-off in our simulations, as shown in Section VI-D [34]. Being in state s after selecting action a and receiving the immediate cost c, the agent updates its knowledge Q(s, a) for this particular state-action pair as follows:…”
Section: B. Q-Learning Based Time-Domain ICIC (mentioning, confidence: 99%)
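The cost structure described in this passage can be sketched in a few lines of Python. This is a minimal illustration, not the cited paper's implementation: the fixed penalty of 500 for exceeding the BS power budget comes from the quote, while the function name and the regular cost returned in the feasible case are hypothetical placeholders.

```python
def immediate_cost(p_tot: float, p_max: float, regular_cost: float) -> float:
    """Immediate cost observed by a base-station agent.

    The fixed penalty of 500 is charged only when the total transmit
    power exceeds the BS maximum, per the quoted passage; the value
    returned in the feasible case stands in for the paper's regular
    (e.g. QoS-based) cost term, which is not given here.
    """
    if p_tot > p_max:
        return 500.0  # infeasible action: hard penalty
    return regular_cost
```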
“…where α = 0.5 is the player's willingness to learn from its environment (the learning rate), λ = 0.9 is the discount factor, and s′ is the next state [34], [35]. Here, the agent's previous knowledge about the state-action pair (s, a) is represented by the first term in (9).…”
Section: B. Q-Learning Based Time-Domain ICIC (mentioning, confidence: 99%)
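Taken together, the two passages above describe the standard tabular Q-learning update with α = 0.5 and λ = 0.9. The sketch below assumes a small, enumerable state-action space and cost minimization, so the bootstrap term takes a minimum over next-state actions rather than the maximum used with rewards; the array layout and function name are illustrative.

```python
import numpy as np

ALPHA = 0.5  # learning rate (willingness to learn), from the quote
LAM = 0.9    # discount factor, from the quote

def q_update(Q: np.ndarray, s: int, a: int, c: float, s_next: int) -> None:
    """One Q-learning step for cost minimization:
    Q(s, a) <- (1 - alpha) * Q(s, a) + alpha * (c + lambda * min_a' Q(s', a')).
    The first term carries the agent's previous knowledge of (s, a)."""
    Q[s, a] = (1 - ALPHA) * Q[s, a] + ALPHA * (c + LAM * np.min(Q[s_next]))
```

With α = 0.5 the agent weights its previous estimate and the newly observed cost equally, and λ = 0.9 makes it strongly far-sighted, which matches the performance/convergence trade-off discussed in the previous passage.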
“…Further power control algorithms for HetNets have been proposed for both uplink [15, 20] and downlink [21, 22] scenarios, based on optimization frameworks whose objective is to minimize the transmit power of small cells under a required SINR constraint. In addition, learning-based noncooperative power control algorithms [23]-[25] have been proposed. The 3GPP Rel-10 standard introduced the enhanced inter-cell interference coordination (eICIC) technique for interference mitigation in HetNets.…”
Section: Related Work (mentioning, confidence: 99%)
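As one concrete instance of the SINR-constrained power-control family surveyed in this passage, the classic distributed target-SINR update (Foschini-Miljanic style) is sketched below. It is not taken from the cited works; the choice of this particular algorithm, the names, and the cap at p_max are illustrative assumptions.

```python
def power_update(p: float, sinr: float, sinr_target: float, p_max: float) -> float:
    """Distributed target-SINR power update: each transmitter scales its
    power by (target SINR / measured SINR), capped at its power budget.
    Converges to the minimum-power solution when the target SINRs are
    jointly feasible."""
    return min(p_max, p * sinr_target / sinr)
```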