2021
DOI: 10.1109/access.2021.3084623

Individual Attribute Selection Using Information Gain Based Distance for Group Classification of Elderly People With Hypertension

Abstract: Attribute selection is the process of selecting the relevant attributes used in model construction to enhance model accuracy. For general medical-oriented classification applications, classical attribute selection methods principally select common attributes in the dataset for all individuals. The idea of using individual attributes is proposed in this study to represent the differences among individuals for self-diagnosis. Consequently, this study proposes a new attribute selection method, called information…


Cited by 6 publications (4 citation statements)
References 32 publications
“…Information gain is still applied to prioritize factors for AWOD. Generally, information gain measures reductions in entropy [32] and determines irrelevant attributes of a dataset [33][34][35][36], including individual factors [37] by considering information gain levels after reducing entropy. For AWOD, the information gain is applied to determine both significant and insignificant factors for individuals.…”
Section: B. Prediction Methods
confidence: 99%
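The statement above describes information gain as the reduction in entropy obtained by splitting a dataset on an attribute, which is then used to flag irrelevant attributes. A minimal sketch of that computation, using a toy pandas DataFrame with hypothetical hypertension-style attributes (not data from the cited study):

```python
import numpy as np
import pandas as pd

def entropy(labels):
    """Shannon entropy of a label column (0 for a perfectly pure set)."""
    probs = pd.Series(labels).value_counts(normalize=True)
    return float(-(probs * np.log2(probs)).sum())

def information_gain(df, attribute, target):
    """Entropy of `target` minus the weighted entropy after splitting on `attribute`."""
    weighted = sum(
        (len(subset) / len(df)) * entropy(subset[target])
        for _, subset in df.groupby(attribute)
    )
    return entropy(df[target]) - weighted

# Toy, hypothetical records: attributes with low gain are candidates for removal.
df = pd.DataFrame({
    "exercise":     ["low", "low", "high", "high"],
    "salt_intake":  ["high", "high", "high", "low"],
    "hypertension": ["yes", "yes", "no", "no"],
})
gains = {a: information_gain(df, a, "hypertension") for a in ["exercise", "salt_intake"]}
print(sorted(gains.items(), key=lambda kv: kv[1], reverse=True))
# [('exercise', 1.0), ('salt_intake', ~0.311)]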
“…The value displayed by the weight by information gain operator is an entropy value. Entropy is a measure of the impurity or disorder of the data, and its value lies in the range 0-1 [11]. Feature selection is an important technique in data preprocessing [12] that eliminates or reduces features in order to improve classification accuracy [13]. Information gain is a filter-based feature selection technique [14]; it ranks attributes and reduces the noise caused by irrelevant features.…”
Section: Figure 2, Raw Data
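As the translated statement notes, information gain is used here as a filter-style feature selector that ranks attributes before classification. A minimal sketch with scikit-learn, where mutual_info_classif stands in as the information-gain score and the generated dataset is purely illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Illustrative data only; in practice X and y come from the preprocessed dataset.
X, y = make_classification(n_samples=200, n_features=8, n_informative=3, random_state=0)

# Filter-based selection: score every feature, keep the top k before training a model.
selector = SelectKBest(score_func=mutual_info_classif, k=3)
X_selected = selector.fit_transform(X, y)

ranking = np.argsort(selector.scores_)[::-1]   # features ordered by estimated gain
print("feature ranking:", ranking)
print("selected shape:", X_selected.shape)     # (200, 3)
```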
“…Using weight by information gain feature selection can reduce the time required to build the decision tree and yields higher performance [13]. This study also ran a test on an imbalanced set of 205 records, 41 labeled 'not pass' and 164 labeled 'pass'. The resulting performance was almost the same, but the most noticeable difference was the very low recall of only 50%.…”
Section: Deployment
“…The order of TSF features is effective in the construction process of the DenseNet and LSTM classifiers. Therefore, information gain [31] was applied to rank the features. Table 4 shows the description of the ranked features of the TSF database.…”
Section: TSF Database Creation
confidence: 99%
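In this last statement, information gain is used only to rank features so they can be ordered before being fed to the DenseNet and LSTM classifiers. A minimal sketch of that ranking-and-reordering step, again with mutual_info_classif as the gain estimate and hypothetical TSF-like feature values:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))                          # hypothetical TSF feature matrix
y = (X[:, 2] + 0.1 * rng.normal(size=100) > 0).astype(int)

gains = mutual_info_classif(X, y, random_state=0)      # information-gain-style scores
order = np.argsort(gains)[::-1]                        # most informative feature first
X_ordered = X[:, order]                                # columns reordered for the classifier
print("feature order:", order)                         # column 2 is expected to rank first
```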