2020
DOI: 10.1007/978-3-030-44584-3_42
Making Learners (More) Monotone

Abstract: Learning performance can show non-monotonic behavior. That is, more data does not necessarily lead to better models, even on average. We propose three algorithms that take a supervised learning model and make it perform more monotone. We prove consistency and monotonicity with high probability, and evaluate the algorithms on scenarios where non-monotone behaviour occurs. Our proposed algorithm MT HT makes less than 1% non-monotone decisions on MNIST while staying competitive in terms of error rate compared to …

Cited by 9 publications (7 citation statements) · References 17 publications
“…Except for trehalulose, all other α‐D‐glucosyl‐D‐fructoses reported a testing accuracy of 1.0000. The learning curve is derived during the evaluation of a model when an increasing amount of events randomly selected from the whole training set was fed into the program for training [44]. According to the learning curve results, a minimum of 648 training data events, equivalent to about 100 events per analyte, is sufficient to achieve a 0.98 overall accuracy of the model (Figure 4c).…”
Section: Results
Mentioning confidence: 99%
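The learning-curve procedure quoted above, training on increasingly large random subsets and evaluating each resulting model on held-out data, can be sketched as follows. This is a minimal illustration only: the nearest-class-mean classifier and the synthetic two-class Gaussian data are assumptions for the sketch, not the model or data from the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: two Gaussian clouds (illustrative stand-in
# for the events used in the quoted study).
n_per_class = 500
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, 2)),
               rng.normal(2.0, 1.0, (n_per_class, 2))])
y = np.repeat([0, 1], n_per_class)

# Shuffle, then hold out a fixed test set.
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]
X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]

def nearest_mean_fit(Xt, yt):
    """Fit a nearest-class-mean classifier: store each class's mean."""
    return {c: Xt[yt == c].mean(axis=0) for c in np.unique(yt)}

def nearest_mean_predict(means, Xq):
    """Predict the class whose mean is closest to each query point."""
    classes = sorted(means)
    d = np.stack([np.linalg.norm(Xq - means[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Learning curve: train on increasing random subsets of the training set,
# evaluate each model on the same held-out test set.
sizes = [10, 25, 50, 100, 200, 400, 800]
curve = []
for n in sizes:
    idx = rng.choice(len(y_train), size=n, replace=False)
    means = nearest_mean_fit(X_train[idx], y_train[idx])
    acc = (nearest_mean_predict(means, X_test) == y_test).mean()
    curve.append(acc)

for n, acc in zip(sizes, curve):
    print(f"n={n:4d}  test accuracy={acc:.3f}")
```

Plotting `curve` against `sizes` gives the learning curve; note that, as the paper under discussion stresses, such curves are not guaranteed to increase monotonically at every step.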
“…The teacher's method selection aims to make it easier for students to understand the lesson as determined by the situation and condition. Furthermore, to support this statement, Viering et al. (2020) said that a method could be monotonous and tedious if the teacher was unable to create an exciting learning process.…”
Section: Discussion
Mentioning confidence: 99%
“…This allows them to determine when to switch to a model trained with more data. In contrast to [198], they argue that their second algorithm does not learn slower, as its generalization bound coincides with a known lower bound of regular supervised learning.…”
Section: Monotonicity: a General Fix?
Mentioning confidence: 90%
“…One may wonder, however, whether generally applicable approaches exist that can turn any learner into a monotone one. A first attempt is made in [198], which proposes a wrapper that, with high probability, makes any classifier monotone in terms of the error rate. The main idea is to consider n as a variable over which model selection is performed.…”
Section: Monotonicity: a General Fix?
Mentioning confidence: 99%
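The wrapper idea described in this statement — treating the training-set size n as the variable over which model selection is performed, and switching to a model trained on more data only when it is demonstrably better — might be sketched roughly as follows. This is a minimal illustration using a one-sided binomial test on validation-set disagreements; it is not the paper's exact MT HT algorithm, and the function names and toy models are hypothetical.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)

def binomial_test_greater(wins, trials, p=0.5):
    """P(X >= wins) for X ~ Binomial(trials, p): a one-sided test of
    whether the candidate wins on more than half the disagreement points."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(wins, trials + 1))

def monotone_select(models, X_val, y_val, alpha=0.05):
    """Sketch of the wrapper idea: walk through models trained on
    increasing n and switch to a later model only when a hypothesis test
    on validation disagreements says it is significantly better."""
    current = models[0]
    for cand in models[1:]:
        cur_ok = current(X_val) == y_val
        cand_ok = cand(X_val) == y_val
        disagree = cur_ok != cand_ok
        wins = int((cand_ok & disagree).sum())   # candidate right, current wrong
        trials = int(disagree.sum())
        if trials > 0 and binomial_test_greater(wins, trials) < alpha:
            current = cand                       # significantly better: switch
    return current

# Toy demonstration: a weak model (as if trained on little data) vs. a
# strong one (as if trained on more data).
X_val = rng.normal(size=200)
y_val = (X_val > 0).astype(int)
weak = lambda X: np.zeros(len(X), dtype=int)     # always predicts class 0
strong = lambda X: (X > 0).astype(int)           # the true decision rule
chosen = monotone_select([weak, strong], X_val, y_val)
print("switched to stronger model:", bool((chosen(X_val) == y_val).all()))
```

Because the wrapper only ever switches when the test rejects at level alpha, a switch that degrades the error rate is unlikely, which is the sense in which such a wrapper makes the learner monotone with high probability.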