2010
DOI: 10.2478/v10143-010-0052-4
Decision Tree Classifiers in Bioinformatics

Abstract: This paper presents a literature review of articles on the use of decision tree classifiers in gene microarray data analysis published in the last ten years. The main focus is on research addressing the cancer classification problem with single decision tree classifiers (the C4.5 and CART algorithms) and decision tree forests (e.g. random forests), showing the strengths and weaknesses of the proposed methodologies compared to other popular classification methods. The article also touches on the use of decisi…

Cited by 13 publications (11 citation statements)
References 18 publications
“…A decision tree that grows too large overfits the training data, while one that is too small underfits and loses accuracy. Algorithms therefore include built-in strategies against overfitting, called pruning. New instances are classified by leading them from the root of the tree down to a leaf, according to the outcomes of the tests along the path [26]. Decision tree classifiers and their ensembles produce competent models, but training sets that differ only slightly can yield completely different models.…”
Section: Decision Trees
confidence: 99%
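The root-to-leaf traversal described in the statement above can be sketched in plain Python. The node structure and class labels below are illustrative assumptions, not taken from the reviewed papers:

```python
# Minimal decision-tree node: internal nodes test one feature against a
# threshold; leaves carry a class label. All names are illustrative.
class Node:
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, label=None):
        self.feature = feature      # index of the attribute tested here
        self.threshold = threshold  # split point for the test
        self.left = left            # subtree when x[feature] <= threshold
        self.right = right          # subtree when x[feature] >  threshold
        self.label = label          # class label (leaves only)

def classify(node, x):
    """Lead the instance from the root down to a leaf, following the
    outcome of each test along the path."""
    while node.label is None:
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.label

# Tiny hand-built tree: split on feature 0, then on feature 1.
tree = Node(feature=0, threshold=2.5,
            left=Node(label="benign"),
            right=Node(feature=1, threshold=1.0,
                       left=Node(label="benign"),
                       right=Node(label="malignant")))

print(classify(tree, [3.0, 2.0]))  # prints: malignant
print(classify(tree, [1.0, 5.0]))  # prints: benign
```

Pruning, in this picture, simply replaces a subtree with a single leaf labelled by the subtree's majority class, trading a little training accuracy for a smaller, less overfitted tree.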
“…Accurate conventional end procedure is a significant tree with too huge, overfitted and trivial trees, underfitted and suffers loss in accuracy. Algorithms have assembled overfitting strategies, labelled trimming, classifying new instances by leading the tree basis down a leaf, with respect to the examination result along the pathway [26]. Competent models are discovered using decision tree classifiers and ensembles, with unbalanced varying trained datasets, with resultant models totally unalike.…”
Section: Decision Treesmentioning
confidence: 99%
“…Most of the algorithms have a built-in mechanism that deals with overfitting; it is called pruning. Each new instance is classified by navigating it from the root of the tree down to a leaf, according to the outcome of the tests along the path [22]. Although decision trees produce efficient models, they are unstable: if the training data sets differ only slightly, the resulting models for those two sets can be completely different.…”
Section: Decision Trees
confidence: 99%
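This instability is the motivation for the tree ensembles (e.g. random forests) covered by the reviewed paper: many trees trained on bootstrap resamples vote on the final class. A minimal sketch, assuming toy one-dimensional data and stump-like "trees" (all names and values are illustrative):

```python
import random
from collections import Counter

# Toy 1-D data: class 0 centred at 0, class 1 centred at 3.
random.seed(0)
neg = [random.gauss(0, 1) for _ in range(50)]
pos = [random.gauss(3, 1) for _ in range(50)]

def train_stump(neg_sample, pos_sample):
    # A minimal "tree": one threshold midway between the class means.
    # Slightly different samples give different thresholds -- the
    # instability the citing papers describe.
    return (sum(neg_sample) / len(neg_sample) +
            sum(pos_sample) / len(pos_sample)) / 2

def forest_predict(thresholds, x):
    # Random-forest-style remedy: majority vote over stumps trained on
    # bootstrap resamples of the training data.
    votes = Counter(int(x > t) for t in thresholds)
    return votes.most_common(1)[0][0]

stumps = [train_stump(random.choices(neg, k=50), random.choices(pos, k=50))
          for _ in range(25)]
print(forest_predict(stumps, 2.5), forest_predict(stumps, 0.2))  # prints: 1 0
```

Each individual threshold moves with its bootstrap sample, but the majority vote is far less sensitive to small changes in the training data, which is exactly the stabilising effect random forests exploit.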
“…Each new instance is classified by navigating it from the root of the tree down to a leaf, according to the outcome of the tests along the path [20,21].…”
Section: Deterministic Classifiers Overview
confidence: 99%