Database management and analysis tools of machine induction (1993)
DOI: 10.1007/bf01066545

Cited by 11 publications (10 citation statements)
References 9 publications
“…Since every internal node of T is a predictive attribute, the hidden variable C appears in every component BN. This fact implies that the component BN at each leaf of T does not represent only one cluster, as Fisher and Hapanyengwi (1993) propose, but a context-specific data clustering. That is, the data clustering encoded by each component BN is totally unrelated to the data clusterings encoded by the rest.…”
Section: RBMNs for Data Clustering
confidence: 99%
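As a reading aid only, here is a minimal Python sketch of the structure this passage describes: a tree T whose internal nodes split on predictive attributes and whose leaves each hold a component Bayesian network containing the hidden cluster variable C, so each leaf encodes its own context-specific clustering. All class and field names are hypothetical, not taken from the cited papers.

```python
# Hypothetical sketch of the RBMN structure described above; names are
# illustrative, not from the cited papers.
from dataclasses import dataclass, field


@dataclass
class ComponentBN:
    """A component Bayesian network stored at one leaf of the tree T.

    It always contains the hidden cluster variable C, so each leaf
    encodes its own context-specific clustering of the data that reach it.
    """
    variables: list          # predictive attributes local to this context
    hidden_var: str = "C"    # the cluster variable appears in every component


@dataclass
class DistinguishedNode:
    """An internal node of T, splitting on one predictive attribute."""
    attribute: str
    # maps each attribute value to a subtree (DistinguishedNode) or a leaf (ComponentBN)
    children: dict = field(default_factory=dict)
```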
“…A previous work with the same aim is that of Fisher and Hapanyengwi (1993), who propose to perform data clustering based upon a decision tree. The measure used to select the divisive attribute at each node during decision tree construction is the sum of information gains over all attributes, whereas in the supervised paradigm the measure is limited to the information gain over a single specified class attribute.…”
Section: RBMNs for Data Clustering
confidence: 99%
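The selection measure quoted above lends itself to a short sketch. The following Python is a minimal illustration (not the authors' code) of scoring each candidate split attribute by the sum of information gains it yields over all other attributes, rather than over a single class attribute; it assumes rows are dicts of discrete attribute values.

```python
# A minimal sketch of the unsupervised attribute-selection measure described
# above: score each candidate split by the SUM of information gains over ALL
# other attributes, not the gain over one designated class attribute.
import math
from collections import Counter


def entropy(values):
    """Shannon entropy of a list of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())


def info_gain(rows, split, target):
    """Information gain about `target` from partitioning `rows` on `split`."""
    base = entropy([r[target] for r in rows])
    cond = 0.0
    for v in {r[split] for r in rows}:
        subset = [r[target] for r in rows if r[split] == v]
        cond += (len(subset) / len(rows)) * entropy(subset)
    return base - cond


def best_divisive_attribute(rows, attributes):
    """Pick the attribute maximizing summed gain over all other attributes."""
    def total_gain(split):
        return sum(info_gain(rows, split, a) for a in attributes if a != split)
    return max(attributes, key=total_gain)
```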
“…One of them is chosen as the root of the multi-level taxonomy. To make that choice we use the partition utility function (see Fisher (1996) and Fisher and Hapanyengwi (1993) for details). Our taxonomy formation algorithm picks the one-level taxonomy with the greatest partition utility value as the one that guides the split at this level of the tree.…”
Section: Partition Utility and Creating Many-Level Taxonomies
confidence: 99%
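For concreteness, here is a hedged sketch of a partition utility in the spirit of Fisher's category utility. The exact formula used by the citing paper is not given in this excerpt, so this Gini-style reading (expected increase in correctly guessable attribute values, averaged over the k clusters) is an assumption.

```python
# Assumption-laden sketch of a Gini-style partition utility, in the spirit of
# Fisher (1996); not necessarily the citing paper's exact formula.
from collections import Counter


def _sq_prob_mass(rows, attributes):
    """Sum over attributes of sum over values of P(A=v)^2 (a Gini-style term)."""
    n = len(rows)
    total = 0.0
    for a in attributes:
        counts = Counter(r[a] for r in rows)
        total += sum((c / n) ** 2 for c in counts.values())
    return total


def partition_utility(clusters, attributes):
    """clusters: list of row-lists forming a one-level taxonomy (a partition)."""
    all_rows = [r for cl in clusters for r in cl]
    n, k = len(all_rows), len(clusters)
    baseline = _sq_prob_mass(all_rows, attributes)
    gain = sum(
        (len(cl) / n) * (_sq_prob_mass(cl, attributes) - baseline)
        for cl in clusters
    )
    return gain / k  # averaged over the k clusters, as in category utility
```

Under this reading, the selection step in the quotation amounts to evaluating partition_utility for each candidate one-level taxonomy and keeping the maximizer.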
“…In particular,
1. due to their simplicity and arguably clear interpretation, measures of association introduced by Goodman and Kruskal (1954) are used in lieu of the χ²-like Cramér's V statistic applied in 49er (Troxel et al., 1994; Żytkow and Zembowicz, 1996);
2. instead of using the original heuristic approach intended to minimize the description length of the taxonomy, the choice of a one-level taxonomy to be attached under a given node is based on maximization of the so-called partition utility (as suggested, e.g., by Fisher and Hapanyengwi (1993), and based either on entropy or on the Gini index);…”
Section: Introduction
confidence: 99%
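As an illustration of the Goodman and Kruskal (1954) family mentioned in point 1, the sketch below computes their lambda statistic from a contingency table. The excerpt does not say which member of the family the citing paper uses (lambda, tau, or gamma), so lambda is shown purely as a representative example.

```python
# Illustrative only: Goodman and Kruskal's lambda, a representative member of
# the 1954 family of association measures mentioned above.
def goodman_kruskal_lambda(table):
    """Proportional reduction in error in predicting the column variable
    from the row variable. `table` is a list of rows of co-occurrence counts."""
    n = sum(sum(row) for row in table)
    col_totals = [sum(col) for col in zip(*table)]
    baseline_error = n - max(col_totals)            # errors ignoring the rows
    conditional_error = n - sum(max(row) for row in table)
    if baseline_error == 0:
        return 0.0  # column variable is constant; nothing to predict
    return (baseline_error - conditional_error) / baseline_error
```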