Proceedings. 1998 28th IEEE International Symposium on Multiple- Valued Logic (Cat. No.98CB36138)
DOI: 10.1109/ismvl.1998.679330

An error reducing approach to machine learning using multi-valued functional decomposition

Abstract: This paper considers minimization of incompletely specified multi-valued functions using functional decomposition. While functional decomposition was originally created for the minimization of logic circuits, this paper uses the decomposition process for both machine learning and logic synthesis of multi-valued functions. As it turns out, the minimization of logic circuits can be used in the concept of "learning" in machine learning, by reducing the complexity of a given data set. A main difference is that mac…

Cited by 8 publications (5 citation statements)
References 14 publications
“…It is well known that logic synthesis methods applied to binary functions with many don't cares (don't knows) are used as a base of various machine learning (ML) approaches [1,2,3]. The learning process creates a circuit description and, as a byproduct, converts don't cares to cares, trying to satisfy the Occam's Razor principle of circuit simplicity.…”
Section: The Concept of Learning Quantum Behaviors from Examples
confidence: 99%
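The quoted statement frames logic-synthesis-based learning as don't-care completion under an Occam's Razor bias: training data fixes the function on only some input vectors, and choosing a simple circuit consistent with those cares implicitly assigns values to every don't care. A minimal Python sketch of that idea (illustrative only; the toy care set and the single-AND-term hypothesis class are assumptions, not the paper's method):

```python
from itertools import combinations, product

# Toy "training data": f is specified only on these input vectors
# (the cares); every other vector is a don't care (don't know).
cares = {
    (1, 1, 0): 1,
    (1, 1, 1): 1,
    (0, 1, 1): 0,
    (1, 0, 0): 0,
}

def term_value(term, x):
    # term: tuple of (variable index, required value) literals, ANDed.
    return int(all(x[i] == v for i, v in term))

def simplest_term(cares, n=3):
    # Enumerate AND terms by size (Occam's Razor: fewest literals
    # first) and return the first one consistent with all cares.
    literals = [(i, v) for i in range(n) for v in (0, 1)]
    for size in range(n + 1):
        for term in combinations(literals, size):
            if all(term_value(term, x) == y for x, y in cares.items()):
                return term
    return None

term = simplest_term(cares)  # -> ((0, 1), (1, 1)), i.e. f = x0 AND x1
# The learned term now defines f on ALL 2^3 inputs: each former
# don't care has been converted to a care by the simple hypothesis.
full_table = {x: term_value(term, x) for x in product((0, 1), repeat=3)}
```

The "learning" step and the "logic minimization" step are the same search: picking the cheapest circuit consistent with the cares.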
“…In past research, we have used and compared (using software) various network structures for learning: two-level AND/OR (sum-of-products, or DNFs),[31] decision trees (using C4.5 software), and multilevel decomposition structures.[21-23,32,33] We have also studied various logic, non-logic, and mixed optimization methods, for example, search,[13] rule-based, set-covering, maximum-clique, graph-coloring, genetic-algorithm (including mixtures of logic and genetic-algorithm approaches),[34,35] genetic-programming,[36] ANN, and simulated-annealing methods. We compared the resulting complexity of our networks (using an approach based on Occam's Razor), as well as various ways of controlling the number of errors in the learning process.…”
Section: Logic-based Learning
confidence: 99%
“…We compared the resulting complexity of our networks (using an approach based on Occam's Razor), as well as various ways of controlling the number of errors in the learning process.[21-24] Because of their strong, theoretically proven properties, the decomposed function cardinality and its extensions for multiple-valued (MV) logic[20-22,33] work well as common measures of network complexity.[22,25] Our conclusion, based on these investigations, is that logic approaches (especially the MV-decomposition techniques), combined with smart heuristic strategies and good data representations, usually provide superior results compared to other approaches.…”
Section: Logic-based Learning
confidence: 99%
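The decomposed function cardinality (DFC) mentioned here totals the lookup-table cells needed to realize a decomposed network, which is why decomposition reduces it. A minimal sketch under the standard definition (the block shapes and the `radix` parameter name are illustrative assumptions, not from the paper):

```python
# Each lookup-table block with k inputs and m outputs over r-valued
# logic needs one table cell per input vector per output signal,
# i.e. m * r**k cells; DFC is the sum over all blocks.

def block_dfc(num_inputs, num_outputs, radix=2):
    return num_outputs * radix ** num_inputs

def network_dfc(blocks, radix=2):
    # blocks: iterable of (num_inputs, num_outputs) pairs
    return sum(block_dfc(k, m, radix) for k, m in blocks)

# A single 6-input, 1-output table costs 2**6 = 64 cells ...
undecomposed = network_dfc([(6, 1)])
# ... while decomposing it into two 3-input blocks feeding a
# 2-input block costs only 8 + 8 + 4 = 20 cells.
decomposed = network_dfc([(3, 1), (3, 1), (2, 1)])
```

Setting `radix` above 2 gives the multiple-valued extension the statement refers to: costs grow as r**k rather than 2**k.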