2019 8th Brazilian Conference on Intelligent Systems (BRACIS)
DOI: 10.1109/bracis.2019.00073

Towards Meta-Learning for Multi-Target Regression Problems

Cited by 5 publications (6 citation statements)
References 29 publications

“…Spyromitros-Xioufis et al. (2016) introduced the stacked ST (SST) and ensemble RC (ERC). These methods can be computationally complex with high memory costs (Mastelini et al. 2019). As Aguiar et al. (2019) state, choosing the most suitable approach needs previous testing and depends on the task.…”
Section: Multi-step Approach (mentioning)
confidence: 99%
“…These methods can be computationally complex with high memory costs (Mastelini et al. 2019). As Aguiar et al. (2019) state, choosing the most suitable approach needs previous testing and depends on the task. The methods cited here are computationally expensive.…”
Section: Multi-step Approach (mentioning)
confidence: 99%
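
For readers unfamiliar with the stacked single-target (SST) scheme referenced in the statements above, the following is a minimal sketch of the idea in Python with scikit-learn: stage 1 fits one regressor per target, and stage 2 refits each target on the original features augmented with the stage-1 predictions of all targets. The synthetic data, model choice, and variable names are illustrative assumptions, not the implementation evaluated in the cited works.

# Minimal SST sketch (illustrative only, not the cited implementation).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# Three correlated synthetic targets.
Y = np.column_stack([X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)
                     for _ in range(3)])
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Stage 1: one independent single-target regressor per target.
stage1 = [Ridge().fit(X_tr, Y_tr[:, j]) for j in range(Y.shape[1])]
Z_tr = np.column_stack([m.predict(X_tr) for m in stage1])
Z_te = np.column_stack([m.predict(X_te) for m in stage1])

# Stage 2: re-learn each target from the original features plus all
# stage-1 predictions (in practice, out-of-fold predictions are preferred).
stage2 = [Ridge().fit(np.hstack([X_tr, Z_tr]), Y_tr[:, j])
          for j in range(Y.shape[1])]
Y_hat = np.column_stack([m.predict(np.hstack([X_te, Z_te])) for m in stage2])
print("test MSE per target:", ((Y_hat - Y_te) ** 2).mean(axis=0))
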
“…They are directly extracted from data with significant computational costs, using methods with significant complexity but without hyperparameters, and they require significant amounts of data. They are based on discrete attributes and include properties such as entropy, which captures the amount of information and complexity of data, and mutual information, which mostly determines the relation of attributes and target class used for classification problems [25]. They are used for presenting different behavior patterns [29], for performing high-quality recommendations, and for representing inner correlations between different classes [30].…”
Section: Meta-features (mentioning)
confidence: 99%
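
As a rough illustration of the information-theoretic meta-features mentioned above (attribute entropy and attribute-class mutual information), a sketch for discrete attributes could look as follows. The toy data and the aggregation (means over attributes) are assumptions made here for illustration; exact meta-feature definitions vary across papers.

# Sketch: entropy and mutual-information meta-features for discrete attributes.
import numpy as np
from sklearn.metrics import mutual_info_score

def entropy(values):
    # Shannon entropy (in bits) of a discrete attribute.
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(500, 4))               # 4 discrete attributes
y = (X[:, 0] + rng.integers(0, 2, size=500)) % 3    # class depends on attribute 0

meta_features = {
    "mean_attr_entropy": np.mean([entropy(X[:, j]) for j in range(X.shape[1])]),
    "class_entropy": entropy(y),
    "mean_attr_class_mutual_info": np.mean(
        [mutual_info_score(X[:, j], y) for j in range(X.shape[1])]),
}
print(meta_features)
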
“…So far, domain-specific meta-features are not used in the anomaly detection domain. They are only used in text classification, where domain-based knowledge provides features such as vocabulary length, word overlap, number of text categories, corpus hardness, domain broadness, and similar [25], [31].…”
Section: Meta-features (mentioning)
confidence: 99%
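
The text-domain meta-features listed in this statement (vocabulary length, word overlap between categories, number of text categories) could be computed roughly as in the sketch below. The toy corpus and the Jaccard-overlap definition are illustrative assumptions, not the definitions used in the cited works.

# Sketch: simple domain-specific meta-features for a text-classification corpus.
corpus = [
    ("the cat sat on the mat", "pets"),
    ("dogs chase the ball in the park", "pets"),
    ("stocks fell as markets reacted to the report", "finance"),
    ("the bank raised interest rates again", "finance"),
]

docs = [set(text.split()) for text, _ in corpus]
labels = [label for _, label in corpus]
vocab = set().union(*docs)

# Per-category vocabulary, then mean pairwise Jaccard overlap between categories.
cat_vocab = {c: set().union(*(d for d, l in zip(docs, labels) if l == c))
             for c in set(labels)}
cats = sorted(cat_vocab)
overlaps = [len(cat_vocab[a] & cat_vocab[b]) / len(cat_vocab[a] | cat_vocab[b])
            for i, a in enumerate(cats) for b in cats[i + 1:]]

meta_features = {
    "vocabulary_length": len(vocab),
    "num_text_categories": len(cats),
    "mean_category_word_overlap": sum(overlaps) / len(overlaps),
}
print(meta_features)
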
“…Another potential solution is to tackle the problem through a divide-and-conquer paradigm, since no single ML algorithm can achieve full coverage over all possible contexts, discrepancies, and noise; this is also known as the 'no free lunch' theorem [8], [9]. Instead, one could design different ML models, each of which performs sufficiently well in one context but lacks performance in others.…”
Section: Introduction (mentioning)
confidence: 99%
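
One simple reading of this divide-and-conquer idea is per-context algorithm selection: train several candidate models and recommend, for each dataset (context), the one that cross-validates best. The sketch below illustrates that reading on synthetic data; the candidate models and the selection criterion are assumptions made here, not the method proposed in the cited paper or in the paper under discussion.

# Sketch: choose the best of several candidate regressors per "context"
# (dataset), a simplified stand-in for the divide-and-conquer idea.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def linear_context(n=200, d=5):
    X = rng.normal(size=(n, d))
    return X, X @ rng.normal(size=d)

def nonlinear_context(n=200, d=5):
    X = rng.normal(size=(n, d))
    return X, np.sin(3 * X[:, 0]) + X[:, 1] ** 2

datasets = {"linear_context": linear_context(), "nonlinear_context": nonlinear_context()}
candidates = {"ridge": Ridge(), "tree": DecisionTreeRegressor(max_depth=4)}

# For each context, cross-validate every candidate and recommend the winner.
for name, (X, y) in datasets.items():
    scores = {m: cross_val_score(est, X, y, cv=5, scoring="r2").mean()
              for m, est in candidates.items()}
    print(name, "->", max(scores, key=scores.get))
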