Fuzzy logic methods have been used successfully in many real-world applications, but the foundations of fuzzy logic remain under attack. Taken together, these two facts constitute a paradox. A second paradox is that almost all of the successful fuzzy logic applications are embedded controllers, while most of the theoretical papers on fuzzy methods deal with knowledge representation and reasoning. I hope here to resolve these paradoxes by identifying which aspects of fuzzy logic render it useful in practice, and which aspects are inessential. My conclusions are based on a mathematical result, on a survey of literature on the use of fuzzy logic in heuristic control and in expert systems, and on practical experience developing expert systems. [MIT Press, 1993, pp. 698-703]

An apparent paradox

As is natural in a research area as active as fuzzy logic, theoreticians have investigated many formal systems, and a variety of systems have been used in applications. Nevertheless, the basic intuitions have remained relatively constant. At its simplest, fuzzy logic is a generalization of standard propositional logic from two truth values, false and true, to degrees of truth between 0 and 1.

Formally, let A denote an assertion. In fuzzy logic, A is assigned a numerical value t(A), called the degree of truth of A, such that 0 ≤ t(A) ≤ 1. For a sentence composed from simple assertions and the logical connectives "and" (∧), "or" (∨), and "not" (¬), degree of truth is defined as follows:

Definition 1: Let A and B be arbitrary assertions. Then
t(A ∧ B) = min{t(A), t(B)}
t(A ∨ B) = max{t(A), t(B)}
t(¬A) = 1 − t(A)
t(A) = t(B) if A and B are logically equivalent.

Theorem 1: For any two assertions A and B, either t(B) = t(A) or t(B) = 1 − t(A).

A direct proof of Theorem 1 appears in the sidebar, but it can also be proved using similar results couched in more abstract terms:

Proposition: Let P be a finite Boolean algebra of propositions and let τ be a truth-assignment function τ: P → [0, 1], supposedly truth-functional via continuous connectives.
Then for all p ∈ P, τ(p) ∈ {0, 1}.

The link between Theorem 1 and this proposition is that ¬(A ∧ ¬B) ≡ B ∨ (¬A ∧ ¬B) is a valid equivalence of Boolean algebra. Theorem 1 is stronger in that it relies on only one particular equivalence, while the proposition is stronger because it applies to any connectives that are truth-functional and continuous (as defined in its authors' paper). The equivalence used in Theorem 1 is rather complicated, but it is plausible intuitively, and it is natural to apply it in reasoning about a set of fuzzy rules, since ¬(A ∧ ¬B) and B ∨ (¬A ∧ ¬B) are both reexpressions of the classical implication A → B. It was chosen for this reason, but the same result can also be proved using many other ostensibly reasonable logical equivalences.

It is important to be clear on what exactly Theorem 1 says, and what it does not say. On the one hand, the theorem applies to any more general formal system that includes the four postulates listed in Definition 1. Any extension of fuzzy logic to accommodate first-order sentences, for example, collapses to two truth values.
The maximum balanced biclique problem (MBBP), an NP-hard combinatorial optimization problem, has been attracting more attention in recent years. Existing node-deletion-based algorithms usually fail to find high-quality solutions because they stagnate easily in local optima, especially as the scale of the problem grows. In this paper, a new algorithm for the MBBP, evolutionary algorithm with structure mutation (EA/SM), is proposed. In the EA/SM framework, local search complemented with a repair-assisted restart process is adopted. A new mutation operator, SM, is proposed to enhance exploration during the local search process. SM can change the structure of solutions dynamically while keeping their size (fitness) and feasibility unchanged. It implements a kind of large mutation in the structure space of the MBBP to help the algorithm escape from local optima. An MBBP-specific local search operator is designed to improve the quality of solutions efficiently; in addition, a new repair-assisted restart process is introduced, in which Marchiori's heuristic repair is modified to repair every new solution reinitialized by an estimation of distribution algorithm (EDA)-like process. The proposed algorithm is evaluated on a large set of benchmark graphs with various scales and densities. Experimental results show that: 1) EA/SM produces significantly better results than the state-of-the-art heuristic algorithms; 2) it also outperforms a repair-based EDA and a repair-based genetic algorithm on all benchmark graphs; and 3) the advantages of EA/SM are mainly due to the introduction of the new SM operator and the new repair-assisted restart process.
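For readers unfamiliar with the problem: given a bipartite graph, the MBBP asks for vertex sets S and T on the two sides, of equal size, such that every vertex of S is adjacent to every vertex of T, with |S| maximized. The helper below (hypothetical, not code from the paper) checks exactly the feasibility invariant that the SM operator is described as preserving while it rearranges a solution's structure.

```python
# Hypothetical feasibility check for a balanced-biclique solution (S, T)
# in a bipartite graph given as a set of (u, v) edge pairs.
def is_balanced_biclique(S, T, edges):
    if len(S) != len(T):  # balance constraint: both sides equal size
        return False
    # completeness constraint: every cross pair must be an edge
    return all((u, v) in edges for u in S for v in T)

# Toy bipartite graph: all pairs over {0,1,2} x {0,1,2} except (2, 0).
edges = {(u, v) for u in range(3) for v in range(3)} - {(2, 0)}
assert is_balanced_biclique({0, 1}, {1, 2}, edges)      # feasible, size 2
assert not is_balanced_biclique({1, 2}, {0, 1}, edges)  # (2, 0) missing
assert not is_balanced_biclique({0, 1}, {2}, edges)     # unbalanced
```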
A well-defined distance is critical for the performance of time series classification. Existing distance measurements can be categorized into two branches. One is to utilize handmade features for calculating distance, e.g., dynamic time warping, which is limited in its ability to exploit the dynamic information of time series. The other branch makes use of the dynamic information by approximating the time series with a generative model, e.g., the Fisher kernel. However, previous distance measurements for time series seldom exploit the label information, which is helpful for classification via distance metric learning. In order to attain the benefits of the dynamic information of time series and the label information simultaneously, this paper proposes a multiobjective learning algorithm for both time series approximation and classification, termed multiobjective model-metric (MOMM) learning. In MOMM, a recurrent network is exploited as the temporal filter, based on which a generative model is learned for each time series as a representation of that series. The models span a non-Euclidean space, where the label information is utilized to learn the distance metric. The distance between time series is then calculated as the model distance weighted by the learned metric. The network size is also optimized to learn parsimonious representations. MOMM simultaneously optimizes the data representation, the time series model separation, and the network size. The experiments show that MOMM achieves not only superior overall performance on uni/multivariate time series classification but also promising time series prediction performance.
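As a point of reference for the "handmade feature" branch mentioned above, here is a minimal dynamic-time-warping sketch (the standard textbook recurrence, not code from the paper) for two univariate series. It illustrates why DTW tolerates temporal shifts but carries no label information.

```python
# Standard DTW distance between two univariate series x and y.
# D[i][j] = cost of best warping of x[:i] onto y[:j].
def dtw(x, y):
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # match, insertion, or deletion -- whichever warping is cheapest
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

# A time-shifted copy is far closer under DTW than a rescaled series.
assert dtw([0, 1, 2, 3], [0, 0, 1, 2, 3]) < dtw([0, 1, 2, 3], [0, 2, 4, 6])
```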