Tuning fuzzy rule-based systems for linguistic fuzzy modeling is an interesting and widely studied task. It involves adjusting some components of the knowledge base without completely redefining it. This contribution introduces a genetic tuning process for jointly fitting the symbolic representations of the fuzzy rules and the meanings of the involved membership functions. To adjust the former component, we propose using linguistic hedges to perform slight modifications while maintaining good interpretability. To alter the latter component, two different approaches are proposed: changing the basic parameters of the membership functions and applying nonlinear scaling factors. As the experimental study shows, the good performance of our proposal stems mainly from performing this tuning at two different levels of significance. The paper also analyzes the interaction of the proposed tuning method with a fuzzy rule set reduction process. A good interpretability-accuracy tradeoff is obtained by combining both processes in a sequential scheme: first reducing the rule set and subsequently tuning the model.
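The two tuning levels the abstract mentions can be illustrated with standard fuzzy-logic definitions. The sketch below is an assumed simplification, not the authors' actual genetic algorithm: it shows Zadeh's classic linguistic hedges applied as powers of a membership degree ("very" as concentration, "more or less" as dilation), and a lateral shift of a triangular membership function's defining parameters. All function names here are illustrative.

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def apply_hedge(mu, hedge):
    """Classic Zadeh hedges: modify a membership degree mu in [0, 1]."""
    if hedge == "very":          # concentration: sharpens the label
        return mu ** 2
    if hedge == "more_or_less":  # dilation: relaxes the label
        return mu ** 0.5
    return mu                    # unmodified label

def shift_params(params, delta):
    """Parameter-level tuning: lateral displacement of a triangular MF."""
    a, b, c = params
    return (a + delta, b + delta, c + delta)

mu = triangular(4.0, 0.0, 5.0, 10.0)   # membership of x = 4 in the label
very_mu = apply_hedge(mu, "very")      # rule-level (hedge) tuning
shifted = shift_params((0.0, 5.0, 10.0), 1.0)  # data-level (MF) tuning
```

In a genetic tuning scheme, the hedge choices and the displacement `delta` for each membership function would be encoded in the chromosome and optimized against the model's error, but that search loop is omitted here.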
Abstract. System modeling with fuzzy rule-based systems (FRBSs), i.e. fuzzy modeling (FM), usually comes with two contradictory requirements on the obtained model: interpretability, the capability to express the behavior of the real system in an understandable way, and accuracy, the capability to faithfully represent the real system. While linguistic FM (mainly developed with linguistic FRBSs) focuses on interpretability, precise FM (mainly developed with Takagi-Sugeno-Kang FRBSs) focuses on accuracy. Since both criteria are of vital importance in system modeling, the balance between them has begun to receive attention in the fuzzy community in recent years. The chapter analyzes mechanisms to find this balance by improving the interpretability in linguistic FM: selecting input variables, reducing the fuzzy rule set, using more descriptive expressions, or performing linguistic approximation; and in precise FM: reducing the fuzzy rule set, reducing the number of fuzzy sets, or exploiting the local description of the rules.
This paper introduces a new learning methodology to quickly generate accurate and simple linguistic fuzzy models: the cooperative rules (COR) methodology. It acts on the consequents of the fuzzy rules to find those that cooperate best. Instead of selecting the consequent with the highest individual performance in each fuzzy input subspace, as ad hoc data-driven methods usually do, the COR methodology considers using a consequent other than the locally best one when doing so makes the fuzzy model more accurate, because it yields a rule set with better cooperation. Our proposal has shown good results in solving three different applications when compared to other methods.
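The core idea of COR — judging consequents by global cooperation rather than local performance — can be sketched on a toy model. This is an assumed simplification, not the paper's exact algorithm: a single-input fuzzy system with three rules, crisp candidate consequents (zero-order Sugeno style) for self-containment, and an exhaustive search over consequent combinations that keeps the one minimizing the global mean squared error on the training data.

```python
import itertools

def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Three linguistic labels partitioning the input domain [0, 10].
antecedents = [(-5, 0, 5), (0, 5, 10), (5, 10, 15)]
# Candidate crisp consequents each rule may take (the COR search space).
candidates = [0.0, 5.0, 10.0, 15.0, 20.0]

def predict(x, consequents):
    """Weighted-average inference over the three rules."""
    num = den = 0.0
    for (a, b, c), y in zip(antecedents, consequents):
        w = triangular(x, a, b, c)
        num += w * y
        den += w
    return num / den if den else 0.0

# Toy training data: the target system is y = 2x.
data = [(float(x), 2.0 * x) for x in range(11)]

def mse(consequents):
    """Global error of one combination of consequents."""
    return sum((predict(x, consequents) - y) ** 2 for x, y in data) / len(data)

# COR-style search: evaluate whole combinations, not one rule at a time.
best = min(itertools.product(candidates, repeat=3), key=mse)
```

Here the exhaustive search recovers the combination `(0.0, 10.0, 20.0)`, which fits the target exactly; the actual COR methodology explores such combinations with more scalable search techniques, since the space grows exponentially with the number of rules.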