2015
DOI: 10.1007/978-3-319-21024-7_8
A Novel Algorithm for the Integration of the Imputation of Missing Values and Clustering

Cited by 2 publications (1 citation statement)
References 4 publications
“…
Technique | Advantage / description | Reference
K-means | Efficient missing-value imputation | Patil et al., 2010
K-means | Generalized technique that can be utilized for many data sets | Ishay and Herman, 2015
K-means | Imputes missing values and builds clusters as a unified, integrated process | Abdallah and Shimshoni, 2016
K-means + radial basis function (RBF) | Faster convergence speed, higher stability and accuracy | Shi et al., 2018
Local least squares | Incorporates local data clustering for improved quality and efficiency | Keerin et al., 2013
Multiple kernel density | Accuracy and efficiency | Liao et al., 2018
Rough set | Handles the uncertainty and vagueness existing in data sets | Amiri and Jensen, 2016
Rough set | Lower computational complexity | Azam et al., 2018
Rough set | Overcomes the problem of crispness | Raja et al., 2019
Shell neighbor | Fills in an incomplete instance using only its left and right nearest neighbors with respect to each attribute; generalized to data sets of mixed attributes | Zhang, 2011
Sliding window | Applicable to IoT devices' data | Kolomvatsos et al., 2019
Soft cluster | Overcomes the problem of inconsistency | Raja and Thangavel, 2016
Decision tree: branch-exclusive splits trees (BEST) | A new classification procedure that handles missing values via data partitioning, with better accuracy | Beaulac and Rosenthal, 2020
Decision tree: boosted trees | Able to handle missingness from data fusion and deterministic or distribution-free data sets | D'Ambrosio et al., 2012
Decision tree: C4.5 | Generalized approach that uses an index measure in estimating missing values | Madhu and Rajinikanth, 2012
Decision tree: classification and regression trees (CART) | A robust method for dealing with different missing-value types | Nikfalazar et al., 2020
Decision tree: decision trees and forests | A higher quality of imputation using similarity and correlations …”
Section: K-means
Citation type: mentioning
Confidence: 99%
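The rows attributed to Patil et al. (2010), Ishay and Herman (2015) and Abdallah and Shimshoni (2016) all describe K-means-driven imputation, where cluster structure and missing-value estimates are produced together rather than in separate passes. The sketch below illustrates that general idea in Python/NumPy; it is a minimal illustration of the technique, not the published algorithm of any of the cited papers, and the function name kmeans_impute and its parameters are assumptions for the example.

```python
# A minimal sketch (assumed names, not the cited papers' code) of integrating
# missing-value imputation with K-means clustering: assignment uses only each
# record's observed features, and missing entries are refilled from the
# assigned cluster's centroid on every pass, so imputation and clustering
# proceed as one process rather than two.
import numpy as np

def kmeans_impute(X, k, n_iter=50, seed=0):
    """X: 2-D float array with np.nan marking missing entries.
    Assumes every column has at least one observed value."""
    rng = np.random.default_rng(seed)
    observed = ~np.isnan(X)
    # Bootstrap: fill each hole with its column mean so every record is complete.
    X_filled = np.where(observed, X, np.nanmean(X, axis=0))
    centroids = X_filled[rng.choice(len(X_filled), size=k, replace=False)]
    labels = np.full(len(X_filled), -1)
    for _ in range(n_iter):
        # Assignment step: distances use each record's observed features only.
        new_labels = np.empty(len(X_filled), dtype=int)
        for i, row in enumerate(X):
            m = observed[i]
            new_labels[i] = np.argmin(
                np.linalg.norm(centroids[:, m] - row[m], axis=1))
        # Imputation step: each missing entry copied from its cluster centroid.
        X_filled = np.where(observed, X, centroids[new_labels])
        # Update step: centroids recomputed from the freshly imputed data.
        for j in range(k):
            members = X_filled[new_labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
        if np.array_equal(new_labels, labels):  # assignments stable: converged
            break
        labels = new_labels
    return X_filled, labels, centroids

if __name__ == "__main__":
    X = np.array([[1.0, 2.0], [1.1, np.nan], [8.0, 9.0], [np.nan, 8.8]])
    X_imp, labels, _ = kmeans_impute(X, k=2)
    print(X_imp)  # nan entries replaced by their cluster's centroid values
```

Because each imputation pass feeds the next centroid update, the filled-in values sharpen the clusters and the clusters in turn sharpen the filled-in values, which is the unified-process property the citing survey highlights.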