Computing with words (CWW) relies on a linguistic representation of knowledge that is processed at the semantic level defined through fuzzy sets. Linguistic representation of knowledge is a major issue when fuzzy rule-based models are acquired from data by some form of empirical learning. Indeed, these models are often required to exhibit interpretability, which is normally evaluated in terms of structural features, such as rule complexity and properties of fuzzy sets and partitions. In this paper we propose a different approach to evaluating interpretability, based on the notion of cointension. The interpretability of a fuzzy rule-based model is measured in terms of the degree of cointension between the explicit semantics, defined by the formal parameter settings of the model, and the implicit semantics conveyed to the reader by the linguistic representation of knowledge. Implicit semantics calls for a representation of the user's knowledge, which is difficult to externalise. Nevertheless, we identify a set of properties - which we call the "logical view" - that is expected to hold in the implicit semantics and is used in our approach to evaluate the cointension between explicit and implicit semantics. In practice, a new fuzzy rule base is obtained by minimising the original rule base through these logical properties. Semantic comparison is then made by evaluating the performances of the two rule bases, which are expected to be similar when the two semantics are almost equivalent. If this is the case, we deduce that the logical view is applicable to the model, which can then be tagged as interpretable from the cointension viewpoint. These ideas are used to define a strategy for assessing the interpretability of fuzzy rule-based classifiers (FRBCs). The strategy has been evaluated on a set of pre-existing FRBCs, acquired by different learning processes from a well-known benchmark dataset.
Our analysis highlights that some of them are not cointensive with the user's knowledge; hence, their linguistic representation is not appropriate, even though they can be tagged as interpretable from a structural point of view.
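The assessment strategy described above can be sketched in a few lines: build a fuzzy rule-based classifier, evaluate both the original and the logically minimised rule base on the same data, and tag the model as cointensive when their performances are close. All names, membership functions, rule bases, and the tolerance below are illustrative assumptions, not the authors' actual settings.

```python
# Minimal sketch of the cointension check (illustrative, not the authors' code).

def tri(a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# An assumed strong fuzzy partition of [0, 1]; "any" acts as a don't-care term.
TERMS = {
    "low": tri(-0.5, 0.0, 0.5),
    "medium": tri(0.0, 0.5, 1.0),
    "high": tri(0.5, 1.0, 1.5),
    "any": lambda x: 1.0,
}

def classify(rule_base, x):
    """Winner-takes-all FRBC: a rule is ((term per feature), class) and its
    activation is the minimum of the term memberships (min t-norm)."""
    return max(rule_base,
               key=lambda r: min(TERMS[t](xi) for t, xi in zip(r[0], x)))[1]

def accuracy(rule_base, data):
    return sum(classify(rule_base, x) == y for x, y in data) / len(data)

def cointensive(rb_original, rb_minimised, data, tol=0.05):
    """Tag the model as cointensive when the logically minimised rule base
    performs almost as well as the original one (tol is an assumed threshold)."""
    return abs(accuracy(rb_original, data) - accuracy(rb_minimised, data)) <= tol

# An original rule base and its minimisation under the assumed logical view:
# the three rules "x1 is low AND x2 is *" collapse to "x1 is low".
rb_full = [(("low", "low"), 0), (("low", "medium"), 0), (("low", "high"), 0),
           (("high", "low"), 1), (("high", "medium"), 1), (("high", "high"), 1)]
rb_min = [(("low", "any"), 0), (("high", "any"), 1)]

data = [((0.1, 0.2), 0), ((0.2, 0.8), 0), ((0.9, 0.3), 1), ((0.8, 0.7), 1)]
```

On this toy data both rule bases classify every sample correctly, so the model would be tagged as cointensive; a large accuracy gap would instead indicate that the logical view does not apply.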
A key feature of machine intelligence is the ability to learn knowledge from past experience. Furthermore, in a human-centric environment, the acquired knowledge must fulfill comprehensibility requirements so that it can be shared with human users. In the literature, several approaches have been proposed to acquire comprehensible knowledge from data by preserving a number of interpretability constraints, especially for fuzzy rule-based classifiers (FRBCs). As a general result, accuracy and interpretability emerge as conflicting features, so that a tradeoff is often required. As a consequence of this tradeoff, the resulting FRBCs are provided with a knowledge base expressed in natural language but, as a matter of fact, the semantics conveyed by the linguistic structures might not be cointensive with the explicit semantics defined in the knowledge base. As an alternative, in this paper we propose a technique for designing FRBCs from data with the specific aim of maximizing interpretability in the sense of semantic cointension. The most important result of this approach is the ability to control cointension so as to select models whose knowledge bases users can understand on the basis of their natural-language description. This enables the use of such FRBCs in a human-centric environment. Experimental sessions are performed on benchmark classification problems to show the effectiveness of the proposed approach.
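The logical minimisation that both abstracts rely on can be illustrated with a single classical property. The sketch below is a hypothetical representation, not the authors' implementation: it drops any rule that is absorbed by a strictly more general rule with the same class label, following the absorption law A OR (A AND B) = A.

```python
# Hypothetical sketch of one "logical view" simplification step: by the
# absorption law, a rule whose antecedent strictly extends that of another
# rule with the same class label is redundant and can be removed.

def minimise(rules):
    """Each rule is (frozenset of (feature, term) literals, class_label).
    Keep a rule only if no strictly more general same-class rule exists."""
    kept = []
    for i, (ante, cls) in enumerate(rules):
        absorbed = any(c == cls and other < ante  # strict subset: more general
                       for j, (other, c) in enumerate(rules) if j != i)
        if not absorbed:
            kept.append((ante, cls))
    return kept

rules = [
    (frozenset({("x1", "low")}), 0),                  # general rule
    (frozenset({("x1", "low"), ("x2", "high")}), 0),  # absorbed by the rule above
    (frozenset({("x1", "high"), ("x2", "low")}), 1),
]
```

Here the second rule adds a condition to the first without changing the class, so it carries no extra information under the logical view and is removed, yielding a two-rule base.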