2011
DOI: 10.1016/j.asoc.2010.12.003

Real-time CBR-agent with a mixture of experts in the reuse stage to classify and detect DoS attacks

Abstract: This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the author's institution and sharing with colleagues. Other uses, including reproduction and distribution, or selling or licensing copies, or posting to personal, institutional or third party websites are prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their pe…

Cited by 10 publications (7 citation statements)
References 37 publications (56 reference statements)
“…Some existing classifiers include bagging and boosting techniques [23,24], which merge the outputs of several classifiers to improve on the results of the individual ones. However, their results are not always satisfactory, as we have shown in previous works [25,26]. To obtain the desired behavior for our system, we can turn to the structure of the mixtures of experts used in artificial intelligence, which merge information based on the outputs provided by several experts [27].…”
Section: Related Work
confidence: 99%
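For context, the mixture-of-experts combination referred to above is conventionally written as a gated, weighted sum of the expert outputs (a standard textbook formulation, not a formula taken from the cited paper):

\[ y(\mathbf{x}) = \sum_{i=1}^{M} g_i(\mathbf{x})\, y_i(\mathbf{x}), \qquad \sum_{i=1}^{M} g_i(\mathbf{x}) = 1, \quad g_i(\mathbf{x}) \ge 0, \]

where y_i is the output of expert i and g_i(x) is the gating weight the mixture assigns to that expert for input x.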
“…The computational cost of evolutionary algorithms is very high, especially when processing huge amounts of data. The same problem exists in [86], [94], where a set of classification algorithms is used to evaluate the performance of a multi-agent architecture. In [87], a knowledge base with a reasoning algorithm is used, but the learning problem remains, because prior knowledge is not available in the case of new attacks.…”
Section: Adaptation and Learning
confidence: 99%
“…Centralized. Adaptation, coordination and mobility
3 [7] Distributed Centralized/Distributed Centralized/Distributed Cooperation, scalability, adaptation and robustness
4 [9], [11], [72], [94] Distributed Distributed Centralized Scalability, load balancing, fault tolerance, and reasoning
5 [15] Distributed Centralized Centralized Self-learning and adaptation
6 [17], [18] Distributed Decentralized Centralized Distribution, cooperation, adaptation and learning
7 [19] Distributed Centralized Centralized Lightweight, adaptation, dynamics
8 [20] Distributed Centralized.…”
Section: A Multi-Agent IDS Architectural Properties and Characteristics
confidence: 99%
“…The extraction of knowledge that is presented to the human expert is carried out using the J48 algorithm [21]. The J48 algorithm is the Java implementation of the C4.5 algorithm, an evolution of the original ID3 [20], whose main advantage is that it allows incorporating numerical attributes into the logical operations carried out in the test nodes. There are other alternatives for the generation of decision rules which operate similarly to decision trees, including RIPPER [22] and PART [23].…”
Section: AI Techniques
confidence: 99%
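As a concrete illustration of the J48 usage described in the snippet, the following minimal Java sketch builds a C4.5-style tree with the Weka library; the dataset file name and the use of Weka's default pruning parameters are assumptions for illustration, not details taken from the paper.

    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class J48Sketch {
        public static void main(String[] args) throws Exception {
            // Load a labelled traffic dataset in ARFF format (file name is a placeholder).
            Instances data = DataSource.read("traffic.arff");
            data.setClassIndex(data.numAttributes() - 1);   // last attribute is the class

            // Build a C4.5-style pruned decision tree with Weka's default settings.
            J48 tree = new J48();
            tree.setConfidenceFactor(0.25f);  // pruning confidence (Weka default)
            tree.setMinNumObj(2);             // minimum instances per leaf (Weka default)
            tree.buildClassifier(data);

            // The textual form of the tree is the knowledge shown to the human expert.
            System.out.println(tree);
        }
    }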
“…There are other alternatives for the generation of decision rules which operate similarly to decision trees, including RIPPER [22] and PART [23]. The J48 algorithm [20] attempts to minimize the width of the decision tree by using heavy search strategies. In summary, the algorithm defines two terms, gain and gain ratio, with respect to the information I(S) contained in a node S. Using only the gain criterion, attributes with many values are favored, given that they can more easily divide the elements into numerous subsets.…”
Section: AI Techniques
confidence: 99%
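For reference, the two quantities mentioned in the snippet can be written with the standard C4.5 definitions, where p_j is the proportion of examples of class j in node S and S_i is the subset of S sent to branch i by the test on attribute A:

\[ I(S) = -\sum_{j} p_j \log_2 p_j \]
\[ G(S,A) = I(S) - \sum_{i} \frac{|S_i|}{|S|}\, I(S_i) \]
\[ \mathrm{SplitInfo}(S,A) = -\sum_{i} \frac{|S_i|}{|S|} \log_2 \frac{|S_i|}{|S|}, \qquad \mathrm{GainRatio}(S,A) = \frac{G(S,A)}{\mathrm{SplitInfo}(S,A)} \]

Dividing the gain by SplitInfo penalizes attributes that fragment S into many small subsets, which is exactly the bias of the plain gain criterion that the snippet describes.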