2012 International Conference on Data Science & Engineering (ICDSE)
DOI: 10.1109/icdse.2012.6282304

Mining high dimensional association rules by generating large frequent k-dimension set

Abstract: Association rule mining aims at generating association rules between sets of items in a database. Nowadays, owing to the rapid growth of database technology, data are increasingly represented in high-dimensional spaces. Generating association rules from such high-dimensional data is tedious because large databases contain many different dimensions or attributes. In this paper, a method for generating association rules from large high-dimensional data is proposed. It constitut…
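The abstract is truncated before the proposed method is described, so the paper's k-dimension-set procedure cannot be reproduced here. Purely as a point of reference, the Python sketch below shows a generic Apriori-style level-wise search for frequent itemsets followed by rule derivation; every name in it (frequent_itemsets, association_rules, min_support, min_confidence) is illustrative and is not taken from the paper.

from itertools import combinations

def frequent_itemsets(transactions, min_support):
    # Generic Apriori-style level-wise search for itemsets whose relative
    # support meets min_support. Illustrative sketch, not the paper's method.
    n = len(transactions)
    transactions = [set(t) for t in transactions]
    counts = {}
    for t in transactions:                      # level 1: single items
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
    all_frequent = dict(frequent)
    k = 2
    while frequent:
        prev = list(frequent)
        # Join step: unions of frequent (k-1)-itemsets that have exactly k items
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

def association_rules(freq, min_confidence):
    # Derive rules A -> B with confidence sup(A ∪ B) / sup(A).
    rules = []
    for itemset, support in freq.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                confidence = support / freq[antecedent]
                if confidence >= min_confidence:
                    rules.append((set(antecedent), set(itemset - antecedent), confidence))
    return rules

# Each transaction is a set of attribute=value items (one item per dimension).
data = [{"A=1", "B=0", "C=1"}, {"A=1", "C=1"}, {"A=1", "B=1"}, {"B=0", "C=1"}]
freq = frequent_itemsets(data, min_support=0.5)
for lhs, rhs, conf in association_rules(freq, min_confidence=0.8):
    print(lhs, "->", rhs, round(conf, 2))   # prints {'B=0'} -> {'C=1'} 1.0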

Cited by 6 publications (2 citation statements)
References 9 publications
“…It is very important because the database may contain combinations of nominal or quantitative attributes. The process of combining attributes was described in [8]. The equivalent algorithm with some necessary modifications is used.…”
Section: A. Preprocessing the Dataset (citation type: mentioning)
confidence: 99%
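The combining procedure from reference [8] is not reproduced in this statement, so the sketch below is only a hedged illustration of what handling mixed nominal and quantitative attributes commonly involves before rule mining: quantitative attributes are discretized into labelled bins, and every attribute-value (or attribute-bin) pair becomes a boolean item. The function name, bin labels, and boundaries are assumptions, not the procedure of [8].

def record_to_items(record, quantitative_bins):
    # Turn one high-dimensional record into a set of boolean items.
    # Nominal attributes become "attr=value" items; quantitative attributes are
    # discretized into labelled intervals first (a common choice, not
    # necessarily the exact procedure described in reference [8]).
    items = set()
    for attr, value in record.items():
        if attr in quantitative_bins:
            for label, upper in quantitative_bins[attr]:
                if value <= upper:                 # first bin covering the value
                    items.add(f"{attr}={label}")
                    break
        else:
            items.add(f"{attr}={value}")
    return items

# Illustrative bins: age is quantitative, occupation is nominal.
bins = {"age": [("young", 30), ("middle", 55), ("senior", float("inf"))]}
print(record_to_items({"age": 42, "occupation": "teacher"}, bins))
# -> {'age=middle', 'occupation=teacher'}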
“…Singh and Agarwal [10] propose a new optimized algorithm and to compare its performance with the existing data mining algorithms. Prasanna and Seetha [11] present a method for generating association rules from large high dimensional data, which can obtain more rapid computing speed and sententious rules. And then, probability-based algorithm, one of an incremental algorithm, is researched by Ariya and Kreesuradej [12], which applies the principle of Bernoulli trial to predict expected frequent item sets for reducing collected border item sets and a number of times to rescan the original database.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
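The statement above attributes to Ariya and Kreesuradej [12] an incremental algorithm that applies the Bernoulli-trial principle to predict expected frequent itemsets without rescanning the original database. One minimal reading of that idea, sketched under the assumption that an itemset occurs in each new transaction independently with its previously observed relative support, is shown below; it is an illustration only, not the authors' actual algorithm.

def expected_frequent(old_count, old_size, increment_size, min_support):
    # Treat each of the increment_size new transactions as a Bernoulli trial
    # with success probability p = old_count / old_size, so the expected number
    # of new occurrences is the mean of a Binomial(increment_size, p) variable.
    # Illustrative reading of the cited idea, not the exact published method.
    p = old_count / old_size
    expected_total = old_count + p * increment_size
    return expected_total >= min_support * (old_size + increment_size)

# An itemset seen in 60 of 1,000 old transactions, with 200 new transactions arriving:
print(expected_frequent(60, 1000, 200, min_support=0.05))   # True (expected support 6% >= 5%)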