49th IEEE Conference on Decision and Control (CDC) 2010
DOI: 10.1109/cdc.2010.5717192

Optimal cross-layer wireless control policies using TD learning

Abstract: We present an on-line cross-layer control technique to characterize and approximate optimal policies for wireless networks. Our approach combines network utility maximization and adaptive modulation over an infinite discrete-time horizon using a class of performance measures we call time-smoothed utility functions. We model the system as an average-cost Markov decision problem. Model approximations are used to find suitable basis functions for application of least-squares TD-learning techniques. The app…
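The least-squares TD-learning with basis functions that the abstract mentions can be illustrated with a minimal sketch. The toy Markov chain, polynomial basis, and discounted (rather than average-cost) formulation below are assumptions for illustration only, not the paper's wireless model:

```python
import numpy as np

# Sketch of least-squares TD (LSTD) with a linear basis on a toy
# 3-state Markov chain under a fixed policy. The chain, rewards,
# basis, and discount factor are hypothetical stand-ins.
rng = np.random.default_rng(0)

P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])   # transition matrix under the policy
r = np.array([1.0, 0.0, 2.0])    # one-step rewards
gamma = 0.95                      # discount (the paper uses average cost)

def phi(s):
    # Polynomial basis in the state index (an assumed choice).
    return np.array([1.0, float(s), float(s) ** 2])

# Accumulate LSTD statistics along a simulated trajectory:
#   A = sum phi(s) (phi(s) - gamma * phi(s'))^T,  b = sum phi(s) r(s)
A = np.zeros((3, 3))
b = np.zeros(3)
s = 0
for _ in range(20000):
    s_next = rng.choice(3, p=P[s])
    f, f_next = phi(s), phi(s_next)
    A += np.outer(f, f - gamma * f_next)
    b += f * r[s]
    s = s_next

w = np.linalg.solve(A, b)                      # fitted weights
V_hat = np.array([phi(s) @ w for s in range(3)])

# Exact solution of the Bellman equation for comparison.
V_exact = np.linalg.solve(np.eye(3) - gamma * P, r)
print(V_hat)
print(V_exact)
```

Since this basis spans the full 3-dimensional value space, the LSTD estimate converges to the exact value function as the trajectory grows; with a restricted basis, as in the paper, the quality of the fixed point is governed by the Bellman error that the citing work bounds.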

Cited by 2 publications (1 citation statement)
References 11 publications
“…The preliminary version of this work [13] and the related conference article [24] use these techniques for TD-learning, and motivate the approach through Taylor series approximations. In the present work, these arguments are refined to obtain explicit bounds on the Bellman error.…”
mentioning
confidence: 99%