Fuzzy rule interpolation (FRI) offers an effective approach for making inference possible in sparse rule-based systems (and also for reducing the complexity of fuzzy models). However, the requirements of a fuzzy system may change over time and hence, the use of a static rule base may affect the accuracy of FRI applications. Fortunately, an FRI system in action produces interpolated rules in abundance during the interpolative reasoning process. While such interpolated results are discarded in existing FRI systems, they can be utilized to facilitate the development of a dynamic rule base in support of subsequent inference. This is because the otherwise relinquished interpolated rules may contain valuable information, covering regions not covered by the original sparse rule base. This paper presents a dynamic fuzzy rule interpolation (D-FRI) approach that exploits such interpolated rules in order to improve the overall system's coverage and efficacy. The resulting D-FRI system is able to select, combine, and generalize informative, frequently used interpolated rules for merging with the existing rule base while performing interpolative reasoning. Systematic experimental investigations demonstrate that D-FRI outperforms conventional FRI techniques, with increased accuracy and robustness. Furthermore, D-FRI is herein applied to network security analysis, in devising a dynamic intrusion detection system (IDS) through integration with Snort, one of the most popular open-source IDSs. This integration, denoted D-FRI-Snort hereafter, adds a layer of intelligence for predicting the level of potential threats. Experimental results show that, with the inclusion of a dynamic rule base that generalizes newly interpolated rules based on the current network traffic conditions, D-FRI-Snort helps reduce both false positives and false negatives in intrusion detection.
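The central D-FRI idea, caching interpolated rules instead of discarding them and promoting frequently used ones into the rule base, can be sketched as follows. This is a minimal illustration only: the class name, the promotion threshold, and the reduction of rules to hashable tuples are assumptions for exposition, not the authors' implementation.

```python
from collections import Counter


class DynamicRuleBase:
    """Sketch of the D-FRI rule-promotion idea (illustrative names):
    interpolated rules are cached with usage counts and, once a rule has
    been produced often enough, it is merged into the rule base."""

    def __init__(self, rules, promote_after=3):
        self.rules = set(rules)        # original sparse rule base
        self.interpolated = Counter()  # usage counts of interpolated rules
        self.promote_after = promote_after

    def record_interpolation(self, rule):
        """Cache an interpolated rule instead of discarding it."""
        if rule in self.rules:
            return
        self.interpolated[rule] += 1
        # promote a frequently used interpolated rule into the rule base
        if self.interpolated[rule] >= self.promote_after:
            self.rules.add(rule)
            del self.interpolated[rule]


rb = DynamicRuleBase({("low", "high")}, promote_after=2)
rb.record_interpolation(("mid", "mid"))
rb.record_interpolation(("mid", "mid"))
print(("mid", "mid") in rb.rules)  # True: promoted after two uses
```

A real system would, of course, match and interpolate fuzzy sets rather than symbolic tuples, and would also combine and generalise similar cached rules before promotion, as the abstract describes.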
Classifier ensembles constitute one of the main research directions in machine learning and data mining. The use of multiple classifiers generally allows better predictive performance than that achievable with a single model. Several approaches exist in the literature for constructing and aggregating such ensembles. However, these ensembles often contain redundant members that, if removed, may further increase group diversity and produce better results. Smaller ensembles also relax the memory and storage requirements, reducing the system's run-time overhead while improving overall efficiency. This paper extends ideas developed for feature selection to support classifier ensemble reduction, by transforming ensemble predictions into training samples and treating classifiers as features. The global heuristic harmony search is then used to select a reduced subset of such artificial features, while attempting to maximize the feature subset evaluation. The resulting technique is systematically evaluated using high-dimensional and large benchmark datasets, showing superior classification performance against both the original, unreduced ensembles and randomly formed subsets.
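The "classifiers as features" transformation described above can be illustrated with a small sketch. The helper name and the toy threshold classifiers are assumptions for illustration; in the paper, each column would hold a trained ensemble member's predictions, and harmony search would then select a subset of columns.

```python
import numpy as np

def predictions_to_features(classifiers, samples):
    """Return an (n_samples, n_classifiers) matrix in which each column
    holds one base classifier's predictions; ensemble reduction then
    becomes feature (column) selection over this matrix."""
    return np.column_stack([clf(samples) for clf in classifiers])

# toy base classifiers (stand-ins for trained models)
clf_a = lambda xs: np.array([x > 0 for x in xs], dtype=int)
clf_b = lambda xs: np.array([x > 1 for x in xs], dtype=int)
clf_c = lambda xs: np.array([x > 0 for x in xs], dtype=int)  # redundant copy of clf_a

X_meta = predictions_to_features([clf_a, clf_b, clf_c], [-1.0, 0.5, 2.0])
print(X_meta.shape)  # (3, 3): 3 samples, 3 "features" (one per classifier)
```

Note that the columns for `clf_a` and `clf_c` are identical, which is exactly the kind of redundancy a subset-based feature evaluator can detect and remove.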
This is the author accepted manuscript; the final version is available from Springer via http://dx.doi.org/10.1007/s10462-015-9428-8

Many strategies have been exploited for the task of feature selection, in an effort to identify more compact and better quality feature subsets. A number of evaluation metrics have been developed recently that can judge the quality of a given feature subset as a whole, rather than assessing the qualities of individual features. Effective techniques of a stochastic nature have also emerged, allowing good quality solutions to be discovered without resorting to exhaustive search. This paper provides a comprehensive review of the most recent methods for feature selection that originated from nature-inspired meta-heuristics, where more classic approaches such as genetic algorithms and ant colony optimisation are also included for comparison. A good number of the reviewed methodologies have been significantly modified for this review, in order to systematically support generic subset-based evaluators and higher dimensional problems. Such modifications are necessary because the original studies either work exclusively with certain subset evaluators (e.g., rough set-based methods) or are limited to specific problem domains. A total of ten different algorithms are examined, and their mechanisms and workflows are summarised in a unified manner. The performance of the reviewed approaches is compared using high dimensional, real-valued benchmark data sets. The selected feature subsets are also used to build classification models, in an effort to further validate their efficacy.
Many search strategies have been exploited in implementing feature selection, in an effort to identify smaller and better subsets. Such work typically involves the use of heuristics in one form or another. In this paper, two novel methods are presented that apply harmony search to feature selection. In particular, the paper demonstrates the potential of utilising this search mechanism in combination with fuzzy-rough feature evaluation. The resulting techniques are compared with approaches that rely on hill-climbing, genetic algorithms and particle swarm optimisation.
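A generic harmony search loop for feature selection can be sketched as below. This is a simplified illustration under stated assumptions: the function name and parameter values are invented for exposition, the evaluator is an artificial stand-in rather than the fuzzy-rough evaluation used in the paper, and each harmony is encoded as a binary mask over features.

```python
import random

def harmony_search_fs(n_features, evaluate, hm_size=10, hmcr=0.9, par=0.1,
                      iterations=200, seed=0):
    """Minimal harmony search for feature selection (a generic sketch).
    Each harmony is a binary mask; `evaluate` scores a mask (higher is better)."""
    rng = random.Random(seed)
    # initialise the harmony memory with random candidate subsets
    memory = [[rng.randint(0, 1) for _ in range(n_features)]
              for _ in range(hm_size)]
    scores = [evaluate(h) for h in memory]
    for _ in range(iterations):
        new = []
        for j in range(n_features):
            if rng.random() < hmcr:          # memory consideration
                bit = rng.choice(memory)[j]
                if rng.random() < par:       # pitch adjustment: flip the bit
                    bit = 1 - bit
            else:                            # random consideration
                bit = rng.randint(0, 1)
            new.append(bit)
        score = evaluate(new)
        worst = min(range(hm_size), key=scores.__getitem__)
        if score > scores[worst]:            # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best = max(range(hm_size), key=scores.__getitem__)
    return memory[best], scores[best]

# artificial evaluator: reward including features 0 and 2, penalise subset size
evaluate = lambda mask: 2 * (mask[0] + mask[2]) - sum(mask)
best_mask, best_score = harmony_search_fs(6, evaluate)
print(best_mask, best_score)
```

In a real application, `evaluate` would be a subset-quality measure such as a fuzzy-rough dependency score computed on the training data, so that the search favours small subsets that preserve discriminative information.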
Many strategies have been exploited for the task of feature selection, in an effort to identify more compact and better quality feature subsets. The development of nature-inspired stochastic search techniques allows multiple good quality feature subsets to be discovered without resorting to exhaustive search. In particular, harmony search is a recently developed technique mimicking the improvisation experience of musicians, which has been effectively utilised to cope with feature selection problems. In this paper, a self-adjusting approach is proposed for feature selection, with the aim of further enhancing the performance of the existing harmony search-based method. This novel approach includes three dynamic strategies: restricted feature domain, harmony memory consolidation, and pitch adjustment. Systematic experimental evaluations using high dimensional, real-valued benchmark data sets are conducted in order to verify the efficacy of the proposed work.