The Handbook of Computational Linguistics and Natural Language Processing 2010
DOI: 10.1002/9781444324044.ch6
Memory‐Based Learning

Abstract: Memory-Based Learning (MBL), and its application to NLP, which we will call Memory-Based Language Processing (MBLP) here, is based on the idea that learning and processing are two sides of the same coin. Learning is the storage of examples in memory, and processing is similarity-based reasoning with these stored examples. The approach is inspired by work in pre-Chomskyan linguistics, categorization psychology, and statistical pattern recognition. The main claim is that, contr…
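The core idea in the abstract — learning is storage, processing is similarity-based extrapolation — can be sketched as a minimal nearest-neighbour classifier. This is an illustrative toy, not the chapter's or TiMBL's actual implementation; the task, feature values, and class names are invented:

```python
# Minimal sketch of memory-based learning (MBLP): learning = storing
# examples in memory; processing = similarity-based reasoning over them.
# All data below is invented for illustration.

def overlap_distance(a, b):
    """Count mismatching feature values (overlap/Hamming distance)."""
    return sum(1 for x, y in zip(a, b) if x != y)

class MemoryBasedClassifier:
    def __init__(self):
        self.memory = []  # learning is just storage of examples

    def learn(self, features, label):
        self.memory.append((features, label))

    def classify(self, features, k=1):
        # Processing: extrapolate from the k most similar stored examples.
        neighbours = sorted(
            self.memory,
            key=lambda ex: overlap_distance(features, ex[0]))[:k]
        labels = [label for _, label in neighbours]
        return max(set(labels), key=labels.count)  # majority vote

# Toy example: choosing a plural suffix from invented word-final features.
mbl = MemoryBasedClassifier()
mbl.learn(("e", "fem", "2syl"), "-n")
mbl.learn(("er", "masc", "2syl"), "-")
mbl.learn(("e", "fem", "3syl"), "-n")
print(mbl.classify(("e", "fem", "1syl"), k=3))  # → -n
```

Note that there is no abstraction step: every training example is retained verbatim, and generalization happens only at classification time.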

Cited by 15 publications (11 citation statements). References 91 publications.
“…In fact, however, this is also the case for many exemplar models (e.g. Daelemans & van den Bosch, 2010, TiMBL; Regier, 2005, LEX; many versions of the GCM), which use attentional feature weights to learn to preferentially weight the cues that best discriminate different outcomes, in a manner identical to discriminative learning models. Consequently, a difference between the approaches can be seen only when we consider ‘pure’ exemplar models that do not posit feature weights (e.g.…”
Section: Results
confidence: 99%
“…The Tilburg Memory Based Learner (TiMBL; Daelemans & van den Bosch, 2010; Keuleers, 2008) is similar in its use of a form of feature weighting (information gain) and a decay function. It differs in that it considers not all stored exemplars, just k nearest neighbours (set as a model parameter) and, in most implementations, does not represent token frequency in any way (though the English past-tense model of Van Noord & Spenader, 2015, is an exception).…”
Section: Morphologically Inflected Words
confidence: 99%
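The citation above mentions TiMBL's defining ingredients: feature weighting by information gain and distance computed over mismatching features. A hedged sketch of how information-gain weights can be computed and folded into the distance — the data is invented and this is not TiMBL's actual implementation:

```python
# Sketch of information-gain feature weighting for a memory-based k-NN
# distance: each feature is weighted by how much knowing its value reduces
# class entropy, so a mismatch on an informative feature costs more than a
# mismatch on a noisy one. Toy data; not TiMBL's real code.
import math
from collections import Counter, defaultdict

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, feature_index):
    """H(C) minus the value-weighted conditional entropy of C given the feature."""
    base = entropy(labels)
    by_value = defaultdict(list)
    for ex, lab in zip(examples, labels):
        by_value[ex[feature_index]].append(lab)
    remainder = sum(len(subset) / len(labels) * entropy(subset)
                    for subset in by_value.values())
    return base - remainder

def ig_weighted_distance(a, b, weights):
    # Sum the weights of the features on which the two instances disagree.
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

# Toy data: feature 0 predicts the class perfectly, feature 1 is pure noise.
X = [("a", "p"), ("a", "q"), ("b", "p"), ("b", "q")]
y = ["yes", "yes", "no", "no"]
weights = [information_gain(X, y, i) for i in range(2)]
print(weights)                                           # [1.0, 0.0]
print(ig_weighted_distance(("a", "p"), ("b", "p"), weights))  # 1.0
```

With these weights, the noisy feature contributes nothing to the distance, which is the sense in which exemplar models with attentional feature weights "preferentially weight the cues that best discriminate different outcomes."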
“…In sum, through experimentation we have shown that although Saussurean analogical reasoning, a form of selective abductive reasoning, has no built-in safety measures to avoid erroneous outcomes, MBLP can offer good performance on unseen data in many natural language processing tasks by using the principle directly, offering the best possible performance when all of the training data is retained in memory. In other work we and others have shown that memory-based learning offers competitive state-of-the-art performance, so that it can be used in most practical natural language processing situations in lieu of any other state-of-the-art machine-learning algorithm (for an overview, see Daelemans and Van den Bosch, 2009).…”
Section: How Friendly Are Linguistic Neighbourhoods?
confidence: 99%
“…Frog was originally developed for Dutch and integrates several NLP modules that perform word and sentence boundary detection, POS tag and lemma labeling, named entity recognition and morphological and syntactic analysis. The majority of the modules are based on memory-based machine learning algorithms [6], a technique proposed to learn NLP classification problems with as its defining characteristic that it stores in memory all available instances of a task, and that it extrapolates from the most similar instances in memory to deal with unseen cases.…”
Section: Frog
confidence: 99%