2021
DOI: 10.1007/s11229-021-03233-1
The no-free-lunch theorems of supervised learning

Abstract: The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue t…

Cited by 38 publications (21 citation statements)
References 58 publications (76 reference statements)
“…Therefore, the authors suggest that future research should provide insights into the selection of different decomposition strategies. Note that this proposal is not intended to derive an optimal policy for all problems and scenarios, which is impossible according to the so-called “No Free Lunch Theorem” [ 113 ]. There is no “super algorithm” that works best for all problems.…”
Section: Discussion
confidence: 99%
“…Thus, for every learning algorithm, there exists a task on which it fails, as no learning algorithm can generalize to all possible realities while having only observed some instances of the realities. To that end, NFL has also been discussed in relation to more fundamental problems in the philosophy of induction, e.g., in connection to Hume's problem of induction (Sterkenburg and Grünwald, 2021; Schurz, 2017) or Occam's razor (Lattimore and Hutter, 2013). Hume famously advanced skepticism against the very justification of induction, arguing that deductive reasoning alone cannot secure the validity of inductive inference; and neither can induction, due to circularity, provide non-deductive grounds for itself (Hume, 1739).…”
Section: Weak and Strong Sample Complexity
confidence: 99%
“…While they may obstruct the prospects of a perfect universal moral learner, it does not stop us from pursuing weaker yet reasonable alternatives that are practically viable. Instead of seeking a global and model-independent justification for why inductive inference seems to work, we can opt for local and model-relative justifications in order to explain why some learning algorithms work better than others (Sterkenburg and Grünwald, 2021). However, it should be stressed that any such alternative would inevitably entail some form of inductive bias; assumptions that we exploit to enable and foster learnability.…”
Section: Weak and Strong Sample Complexity
confidence: 99%
“…Wolpert has argued that NFL "can be viewed as a formalization and elaboration of concerns about the legitimacy of inductive inference, concerns that date back to David Hume (if not earlier)" (Wolpert, 2012, 1). Philosophers have examined the relationship between NFL and inductive inference (Schurz, 2017) and have argued extensively against Wolpert's interpretation (Sterkenburg & Grünwald, 2021). Because Dotan's argument relies upon NFL, it would be useful to review in detail the two main NFL results, in search and optimization and in supervised learning, along with the philosophical discussion.…”
Section: No Free Lunch Theorem In Detail
confidence: 99%