2001
DOI: 10.1016/s0140-3664(00)00331-5

Hierarchical neuro-fuzzy call admission controller for ATM networks

Cited by 16 publications (6 citation statements)
References 15 publications
“…In this work, we have used the Generalised Approximate Reasoning-based Intelligent Control (GARIC) architecture of Berenji and Khedkar [24] as the basis of our system, primarily because of the facility it gives to combine actor-critic and neurofuzzy methods. It has been widely employed in similar works on intelligent control (e.g., [25][26][27]). GARIC consists of two neural networks: the Action Selection Network (ASN) operating as the actor and the Action Evaluation Network (AEN) which criticises the actions made by the ASN.…”
Section: Actor-critic Methods (mentioning)
confidence: 99%
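The statement above describes the actor-critic split that GARIC builds on: one network (the ASN) proposes actions, while a second network (the AEN) evaluates them and drives learning. As a rough, minimal sketch of that split only, and not of the GARIC ASN/AEN or the call admission controller itself, the toy Python below pairs a softmax "actor" with a TD-error "critic" on an assumed chain environment; all names, parameters, and the environment are illustrative assumptions, not taken from the cited papers.

```python
# Minimal actor-critic sketch (illustrative only; not the GARIC ASN/AEN).
# The "actor" samples actions from a softmax policy; the "critic" estimates
# state values and supplies a TD error that updates both components.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 2
theta = np.zeros((n_states, n_actions))   # actor parameters (policy logits)
v = np.zeros(n_states)                    # critic parameters (state values)
alpha_actor, alpha_critic, gamma = 0.1, 0.2, 0.95

def policy(s):
    """Softmax policy over the actor's logits for state s."""
    z = theta[s] - theta[s].max()
    p = np.exp(z)
    return p / p.sum()

def step(s, a):
    """Assumed toy chain: action 1 moves right; reaching the last state pays 1."""
    s_next = min(s + a, n_states - 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        p = policy(s)
        a = rng.choice(n_actions, p=p)
        s_next, r, done = step(s, a)
        # TD error: the critic's evaluation of the action the actor just took.
        td = r + (0.0 if done else gamma * v[s_next]) - v[s]
        v[s] += alpha_critic * td          # critic update
        grad = -p
        grad[a] += 1.0                     # gradient of log pi(a|s) w.r.t. theta[s]
        theta[s] += alpha_actor * td * grad  # actor update, scaled by the critique
        s = s_next

print("probability of 'move right' in state 0:", policy(0)[1])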
“…It obtains knowledge through trial and error and interaction with the environment to improve its behavior policy. Because of the advantages above, RL has played a very important role in flow control in high-speed networks [4][5][6][7][8].…”
Section: Introduction (mentioning)
confidence: 99%
“…So it has the ability of self-learning. Because of the advantages above, RL has played a very important role in flow control in high-speed networks [4][5][6][7]. The Q-learning algorithm of RL is easy to apply and has a firm theoretical foundation.…”
Section: Introduction (mentioning)
confidence: 99%
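The last statement refers to tabular Q-learning. The sketch below shows the standard Q-learning update rule on an assumed toy environment that merely stands in for the flow-control setting of the citing papers; the environment, parameters, and function names are illustrative assumptions, not a reproduction of their actual scheme.

```python
# Tabular Q-learning sketch (generic illustration; the citing papers apply
# this kind of rule to flow control in high-speed networks).
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.3

def env_step(s, a):
    """Assumed toy chain standing in for the system being controlled."""
    s_next = min(s + a, n_states - 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for episode in range(300):
    s, done = 0, False
    while not done:
        # Epsilon-greedy exploration over the current value estimates.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next, r, done = env_step(s, a)
        # Q-learning target: bootstrap from the greedy value of the next state.
        target = r + (0.0 if done else gamma * Q[s_next].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

print(np.round(Q, 3))
```

The appeal noted in the quotation is visible here: the update needs only the observed transition (s, a, r, s_next) and a lookup table, with no model of the controlled system.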