Hierarchical Multi-label Classification using Fully Associative Ensemble Learning
Pattern Recognition, 2017. DOI: 10.1016/j.patcog.2017.05.007

Cited by 72 publications (29 citation statements) · References 24 publications
“…Information from ancestors and closely related siblings in the hierarchy may provide useful information for protein function prediction, including through heterogeneous ensembles. Previous work has utilized this information for advancing individual and ensemble PFP algorithms [37-39], and similar ideas can be used to improve heterogeneous ensembles as well.…”
Section: Discussion (mentioning, confidence: 99%)
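To make the ancestor/sibling idea concrete, here is a minimal sketch: a term's own prediction score is augmented with features summarizing its ancestors' and siblings' scores before a downstream combiner uses them. The hierarchy, scores, and the simple averaging scheme below are illustrative assumptions, not the cited papers' actual method.

```python
# Minimal sketch: augmenting a term's score with hierarchy context.
# The hierarchy, base scores, and averaging are hypothetical; the cited
# PFP methods use richer schemes.

def hierarchy_features(term, scores, parents, children):
    """Build [own, mean-ancestor, mean-sibling] features for one term."""
    # Collect all ancestors by walking parent links (DAG-safe via a set).
    ancestors, stack = set(), list(parents.get(term, []))
    while stack:
        a = stack.pop()
        if a not in ancestors:
            ancestors.add(a)
            stack.extend(parents.get(a, []))
    # Siblings: other children of this term's parents.
    siblings = {c for p in parents.get(term, [])
                for c in children.get(p, []) if c != term}
    mean = lambda ts: sum(scores[t] for t in ts) / len(ts) if ts else 0.0
    return [scores[term], mean(ancestors), mean(siblings)]

# Hypothetical miniature hierarchy of function terms.
parents = {"kinase": ["catalytic"], "hydrolase": ["catalytic"],
           "catalytic": ["mol_func"]}
children = {"catalytic": ["kinase", "hydrolase"], "mol_func": ["catalytic"]}
scores = {"kinase": 0.8, "hydrolase": 0.3, "catalytic": 0.6, "mol_func": 0.5}
print(hierarchy_features("kinase", scores, parents, children))
# -> [0.8, 0.55, 0.3]  (own score, mean of ancestors, mean of siblings)
```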
“…Hierarchical Multi-label Classification (HMC) is also related to the proposed framework. In HMC, each sample has more than one label, and all these labels are organized hierarchically in a tree or Directed Acyclic Graph (DAG) [41,42]. Hierarchical information in tree and DAG structures is used to improve classification performance [43,44].…”
Section: Related Work (mentioning, confidence: 99%)
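For intuition, here is a minimal sketch of one common way hierarchical structure improves HMC predictions: under the true-path rule, a positive label implies all of its ancestors are positive, so raw scores can be made hierarchy-consistent by propagating them upward through the DAG. The label names and parent map below are hypothetical, and this post-processing step is only one of several ways the cited works exploit hierarchy.

```python
# Minimal sketch: enforcing hierarchy consistency in HMC predictions.
# Assumption: labels form a DAG given as child -> list of parents, and a
# label's score should be at least the maximum score of its descendants
# (true-path rule). Hierarchy and scores are hypothetical examples.

def propagate_scores(scores, parents):
    """Return hierarchy-consistent scores: each label's score is raised
    to the maximum score found among its descendants."""
    consistent = dict(scores)

    def lift(label, value):
        # Raise every ancestor's score to at least `value`.
        for parent in parents.get(label, []):
            if consistent.get(parent, 0.0) < value:
                consistent[parent] = value
            lift(parent, value)

    for label, value in scores.items():
        lift(label, value)
    return consistent

# Hypothetical GO-like hierarchy: child -> parents (a DAG, not just a tree).
parents = {
    "kinase_activity": ["catalytic_activity"],
    "catalytic_activity": ["molecular_function"],
    "protein_binding": ["binding"],
    "binding": ["molecular_function"],
}

raw = {"kinase_activity": 0.9, "catalytic_activity": 0.4,
       "molecular_function": 0.3, "protein_binding": 0.2, "binding": 0.1}
print(propagate_scores(raw, parents))
# catalytic_activity and molecular_function are lifted to 0.9, binding to 0.2
```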
“…During the training phase, the complexity of computing and storing the kernel matrix K(z, z) is significant for large-size problems. Therefore, a random sample-selection technique introduced in Zhang et al. [50] can be applied to reduce the kernel complexity of large-scale datasets. The idea behind this is to select a small number of samples that can represent the distribution of a large-scale dataset.…”
Section: Fully Associative Learning (mentioning, confidence: 99%)
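As a rough illustration of the random sample-selection idea, here is a minimal sketch: instead of forming the full n×n Gram matrix, kernel values are computed only against a small random subset of m landmark samples, cutting storage and compute from O(n²) to O(nm). The RBF kernel, uniform sampling, and the sizes below are assumptions for illustration; the cited method's exact selection scheme may differ.

```python
# Minimal sketch of random sample selection for kernel approximation.
# Assumptions: an RBF kernel and uniform random landmark selection; the
# cited method's exact selection scheme may differ.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
n, d, m = 10_000, 16, 200          # n samples, d features, m landmarks
Z = rng.standard_normal((n, d))

# Select a small random subset that stands in for the full dataset.
landmarks = Z[rng.choice(n, size=m, replace=False)]

# n x m kernel block instead of the full n x n Gram matrix:
# storage and compute drop from O(n^2) to O(n*m).
K_nm = rbf_kernel(Z, landmarks)
print(K_nm.shape)                   # (10000, 200)
```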