2009 ISECS International Colloquium on Computing, Communication, Control, and Management
DOI: 10.1109/cccm.2009.5267918
Cross-layer design of cognitive radio network for real time video streaming transmission

Cited by 6 publications (11 citation statements)
References 9 publications
“…In the following, we survey and classify the feasible and already proven resource allocation techniques applicable for multimedia transmission over CRNs in order to guarantee QoS/QoE, as listed in Table IX. B.2.1) Machine Learning-based Resource Allocation Techniques: machine learning is supposed to provide a mechanism to guide system reconfiguration by knowing the environment perception results and device reconfigurability, in order to maximize the utility of the available resources [260]. SUs are aware of their environment by nature, but in order to be … (techniques listed in Table IX: Bayesian Model [146], [224], [225]; Clustering Algorithm [77], [79]; Genetic Algorithm [110], [128], [130], [226]; Decision Tree [108]; Markov Model [29], [42], [101], [121], [139], [219], [227]–[234]; Multi-agent Learning [210], [212], [235], [236]; Simulated Annealing [222]; Game Theory…)”
Section: B.2 (mentioning)
Confidence: 99%
“…A multi-agent learning model is applicable to CRNs, where SUs are the agents and the ultimate goal of the competition is to occupy the best available primary channel. This model has been studied by [210], [212], [235], [236] for multimedia transmission over CRNs.…”
Section: B.2 (mentioning)
Confidence: 99%
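The multi-agent model described above, in which each SU is an independent learning agent competing to occupy the best available primary channel, can be illustrated with a minimal epsilon-greedy Q-learning sketch. This is not the algorithm of any of the cited works ([210], [212], [235], [236]); all names (`SecondaryUser`, `step`) and the channel-quality values are illustrative assumptions.

```python
import random

class SecondaryUser:
    """One learning agent: a secondary user (SU) choosing a primary channel."""

    def __init__(self, n_channels, epsilon=0.1, alpha=0.2):
        self.q = [0.0] * n_channels   # estimated value of each channel
        self.epsilon = epsilon        # exploration rate
        self.alpha = alpha            # learning rate

    def choose(self):
        # epsilon-greedy: mostly exploit the best-known channel
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda c: self.q[c])

    def update(self, channel, reward):
        # incremental update of the channel's Q-value toward the observed reward
        self.q[channel] += self.alpha * (reward - self.q[channel])

def step(users, channel_quality):
    """One round: every SU picks a channel; a collision earns zero reward."""
    picks = [su.choose() for su in users]
    for su, ch in zip(users, picks):
        collided = picks.count(ch) > 1          # another SU chose the same channel
        reward = 0.0 if collided else channel_quality[ch]
        su.update(ch, reward)
    return picks

random.seed(0)
quality = [0.2, 0.9, 0.6]   # hypothetical per-channel throughput qualities
users = [SecondaryUser(len(quality)) for _ in range(2)]
for _ in range(2000):
    step(users, quality)
# after learning, the SUs tend to spread over the good channels to avoid collisions
print(sorted(su.q.index(max(su.q)) for su in users))
```

With independent learners and collision penalties, the agents typically converge to distinct channels, which is the competitive-occupation behavior the excerpt describes.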
“…Larcher et al [19] propose a paradigm in which competing SUs dynamically exchange spectral resources and adapt their transmission strategies to improve the quality of delay-sensitive multimedia applications. Hang Qin and Yanrong Cui [20] propose a distributed resource management scheme, the RDMAL (Real time Distributed Multi-Agent Learning) algorithm, for use in cognitive radio networks. RDMAL uses the multi-agent learning approach to effectively exploit available frequency channels, and uses available interference information to achieve learning efficiency.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Yan Chen et al [22] study multimedia streaming over cognitive radio networks and propose three auction-based schemes (ACA-S, ACA-T, and ACA-A) to distributively allocate the spectrum. Although those studies [18][19][20][21][22] discuss resource allocation for multimedia traffic transmissions, their schemes are not suitably applied to SVC-encoded multimedia streams because they only study the issue of frame transmission delay but do not take frame priorities (such as the priority structure resulting from the encoding scheme of SVC) into consideration. SVC was standardized by the Joint Video Team (JVT) and ITU-T Video Coding Experts Group (VCEG) in 2007, and is still a new standard.…”
Section: Introduction (mentioning)
Confidence: 99%
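The distinction the excerpt draws, scheduling SVC-encoded streams by frame priority rather than by transmission delay alone, can be sketched as a priority queue in which base-layer frames (which all enhancement layers depend on) are always sent before enhancement-layer frames, and earlier deadlines break ties. The class and function names here are illustrative, not from the cited paper.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class SvcFrame:
    # lower sort key = sent first: base layer (layer 0) before enhancement
    # layers, and earlier playback deadlines before later ones
    sort_key: tuple = field(init=False)
    layer: int       # 0 = base layer, 1+ = enhancement layers
    deadline: float  # playback deadline in seconds
    frame_id: int

    def __post_init__(self):
        self.sort_key = (self.layer, self.deadline)

def schedule(frames, slots):
    """Send at most `slots` frames, base layer first, then enhancements."""
    queue = list(frames)
    heapq.heapify(queue)
    sent = []
    while queue and len(sent) < slots:
        sent.append(heapq.heappop(queue))
    return sent

frames = [
    SvcFrame(layer=1, deadline=0.10, frame_id=1),
    SvcFrame(layer=0, deadline=0.10, frame_id=0),
    SvcFrame(layer=2, deadline=0.10, frame_id=2),
    SvcFrame(layer=0, deadline=0.20, frame_id=3),
]
sent = schedule(frames, slots=3)
print([f.frame_id for f in sent])  # → [0, 3, 1]
```

A delay-only scheduler would treat all three frames with deadline 0.10 identically; the layer-aware ordering instead protects both base-layer frames when bandwidth allows only three transmissions, which is the point the excerpt makes against the schemes of [18]–[22].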