2008
DOI: 10.1007/s10618-008-0092-3
Using association rules to mine for strong approximate dependencies

Abstract: In this paper we deal with the problem of mining for approximate dependencies (ADs) in relational databases. We introduce a definition of AD based on the concept of association rule, by means of suitable definitions of the concepts of item and transaction. This definition allows us to measure both the accuracy and the support of an AD. We provide an interpretation of the new measures based on the complexity of the theory (set of rules) that describes the dependency, and we employ this interpretation to compare the n…
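The abstract's idea of measuring an AD with association-rule accuracy and support can be illustrated with a tuple-pair construction: each unordered pair of tuples acts as a "transaction", and a transaction contains the item "X" when both tuples agree on every attribute in X. This is a hedged sketch under that assumption (the function name, the sample relation, and the exact item/transaction encoding are illustrative, not necessarily the authors' definitions):

```python
from itertools import combinations

def ad_support_confidence(rows, lhs, rhs):
    """Support and confidence of the approximate dependency lhs -> rhs,
    computed over tuple-pair 'transactions': a pair contributes to the
    antecedent when both tuples agree on every attribute in lhs, and to
    the rule when they also agree on every attribute in rhs."""
    pairs = list(combinations(rows, 2))
    agree = lambda t, u, attrs: all(t[a] == u[a] for a in attrs)
    n_lhs = sum(1 for t, u in pairs if agree(t, u, lhs))
    n_both = sum(1 for t, u in pairs if agree(t, u, lhs) and agree(t, u, rhs))
    support = n_both / len(pairs) if pairs else 0.0
    confidence = n_both / n_lhs if n_lhs else 1.0
    return support, confidence

# Toy relation: one typo ('Granda') makes zip -> city only approximate.
rows = [
    {"zip": "18012", "city": "Granada"},
    {"zip": "18012", "city": "Granada"},
    {"zip": "18012", "city": "Granda"},
    {"zip": "41001", "city": "Sevilla"},
]
sup, conf = ad_support_confidence(rows, ["zip"], ["city"])
```

Note how a single exceptional tuple lowers pair-based confidence sharply (it spoils every pair it takes part in), which is one reason measures over tuple pairs behave differently from measures over tuples.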

Cited by 24 publications (13 citation statements). References 27 publications (36 reference statements).
“…Another approach to harness the complexity is to use heuristics to prune potentially uninteresting AFD candidates [40]. Because this can cause the loss of interesting results, Pyro instead discovers all approximate dependencies for some given error threshold and leaves filtering or ranking of the dependencies to use-case specific post-processing.…”
Section: Related Workmentioning
confidence: 99%
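The error threshold this excerpt attributes to Pyro can be illustrated with the standard g3 measure for approximate functional dependencies: the minimum fraction of tuples that must be removed so the dependency holds exactly. A minimal sketch, assuming that measure (this is not Pyro's actual implementation, and the sample data is invented):

```python
from collections import Counter, defaultdict

def g3_error(rows, lhs, rhs):
    """g3 error of the FD lhs -> rhs: group tuples by their lhs value,
    keep the most common rhs value in each group, and return the
    fraction of tuples that would have to be removed."""
    groups = defaultdict(Counter)
    for t in rows:
        x = tuple(t[a] for a in lhs)
        y = tuple(t[a] for a in rhs)
        groups[x][y] += 1
    kept = sum(c.most_common(1)[0][1] for c in groups.values())
    return 1 - kept / len(rows)

rows = [
    {"zip": "18012", "city": "Granada"},
    {"zip": "18012", "city": "Granada"},
    {"zip": "18012", "city": "Granda"},   # one exceptional tuple
    {"zip": "41001", "city": "Sevilla"},
]
err = g3_error(rows, ["zip"], ["city"])
# Discovery in the style described above would report zip -> city
# whenever err is at most the user-supplied threshold.
holds = err <= 0.3
```

Ranking or filtering the reported dependencies then happens in post-processing, as the excerpt notes, rather than via pruning heuristics during discovery.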
“…Apart from dependency discovery, several other problems, such as frequent itemset mining [3,14], belong in this category and can be tackled with the same algorithmic foundations [32]. For instance, AFD discovery can be modeled as a frequent itemset mining problem; however, such adaptations require additional tailoring to be practically usable [40].…”
Section: Related Workmentioning
confidence: 99%
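The modeling step this excerpt mentions, AFD discovery as frequent itemset mining, can be sketched roughly as follows: encode each tuple pair as a transaction containing the attributes on which the pair agrees, then run a plain levelwise (Apriori-style) search for frequent attribute sets, from which candidate rules X ⇒ A are read off. This is a hedged illustration under that encoding, not the cited papers' exact construction:

```python
from itertools import combinations

def agree_sets(rows, attrs):
    """One transaction per tuple pair: the set of attributes on which
    the two tuples agree."""
    return [frozenset(a for a in attrs if t[a] == u[a])
            for t, u in combinations(rows, 2)]

def frequent_itemsets(transactions, min_support):
    """Plain Apriori: levelwise generation of frequent attribute sets."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    level = [frozenset([i]) for i in items]
    frequent = {}
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        survivors = {c: k / n for c, k in counts.items()
                     if k / n >= min_support}
        frequent.update(survivors)
        # Join step: combine surviving k-sets into (k+1)-set candidates.
        level = list({a | b for a, b in combinations(sorted(survivors, key=sorted), 2)
                      if len(a | b) == len(a) + 1})
    return frequent

rows = [
    {"zip": "18012", "city": "Granada"},
    {"zip": "18012", "city": "Granada"},
    {"zip": "18012", "city": "Granda"},
    {"zip": "41001", "city": "Sevilla"},
]
freq = frequent_itemsets(agree_sets(rows, ["zip", "city"]), 0.3)
```

The "additional tailoring" the excerpt alludes to is visible even here: the quadratic blow-up from tuple pairs and the need to turn frequent sets back into dependency candidates are both extra work on top of the generic itemset miner.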
“…Approximate dependencies were later defined to capture dependencies with exceptions. The "quality" of an approximate dependency is evaluated by measures such as support and confidence, whose definitions derive from those of association rules [10].…”
Section: Functional Approximate and Temporal Dependenciesmentioning
confidence: 99%
“…Like the RBFNN, the connection weight vector w_i^(1) of the first hidden layer may be randomly selected beforehand from the total training sample vectors, or determined by some adaptive method [10] (such as k-means clustering, the Kohonen training method, LBG, RPCL [11], etc.). Thus the output y_i corresponding to the i-th hidden node can be expressed as…”
Section: The Neural Network Trainingmentioning
confidence: 99%