2015
DOI: 10.7763/ijmlc.2015.v5.517

Filter Based Feature Selection Methods for Prediction of Risks in Hepatitis Disease

Abstract: Recently, large amounts of data have become widely available in information systems, and data mining has attracted considerable attention from researchers seeking to turn such data into useful knowledge. This also implies the existence of low-quality, unreliable, redundant, and noisy data, which negatively affects the process of discovering knowledge and useful patterns. Researchers therefore need to extract relevant data from huge records using feature selection methods. Feature selection is the process of identifying the most relevant attributes a…

Cited by 73 publications (48 citation statements)
References 15 publications
“…Pinar Yildirim's [12] study found that feature selection methods can improve the performance of learning algorithms. The experiments were performed with four classification algorithms on the well-known hepatitis data set, and the presented approach was evaluated.…”
Section: Related Work
confidence: 99%
“…Embedded methods conduct feature selection during the learning phase, as in artificial neural networks. Filters assess each feature independently of the learning algorithm, rank the features after assessment, and keep the highest-ranked ones [9]. This assessment may be done using entropy, for example [10].…”
Section: Introduction
confidence: 99%
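The entropy-based filter scoring mentioned in the statement above can be sketched in a few lines of Python. This is a minimal illustration of information gain as a per-feature filter criterion; the toy data and feature names are hypothetical, not taken from the cited papers:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Entropy reduction in `labels` when split on one discrete feature,
    assessed independently of any learning algorithm (a filter criterion)."""
    n = len(labels)
    base = entropy(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += (len(subset) / n) * entropy(subset)
    return base - cond

# Hypothetical toy data: two discrete features, binary class labels.
labels = ["sick", "sick", "well", "well", "well", "sick"]
feat_a = [1, 1, 0, 0, 0, 1]   # perfectly predicts the label
feat_b = [0, 0, 0, 0, 0, 0]   # constant, hence uninformative

scores = {"feat_a": information_gain(feat_a, labels),
          "feat_b": information_gain(feat_b, labels)}
# feat_a scores 1.0 bits; feat_b scores 0.0 bits
```

A filter method would compute such a score for every attribute, rank the attributes, and keep the highest-scoring ones before any classifier is trained.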
“…Feature selection helps the data mining algorithm zero in on relevant features so that the hypothesis space can be reduced [1]. Feature selection is basically done in two ways: one is to rank the features by some criterion and select the top k features; the other is to select a minimal feature subset without any decrease in learning performance.…”
Section: Introduction
confidence: 99%
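The first of the two strategies described above, top-k ranking, is straightforward to sketch. The scores and feature names below are made-up illustrations (loosely themed on hepatitis attributes), not results from the cited paper:

```python
def select_top_k(feature_scores, k):
    """Rank features by score (descending) and keep the top k names."""
    ranked = sorted(feature_scores, key=feature_scores.get, reverse=True)
    return ranked[:k]

# Hypothetical relevance scores, e.g. from an entropy-based filter.
scores = {"age": 0.42, "bilirubin": 0.61, "sex": 0.05, "albumin": 0.37}
select_top_k(scores, 2)  # → ['bilirubin', 'age']
```

The minimal-subset strategy, by contrast, searches over feature combinations rather than scoring each feature in isolation, so it cannot be reduced to a single ranking pass like this.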