Incremental Concept Learning for Bounded Data Mining (1999)
DOI: 10.1006/inco.1998.2784

Cited by 74 publications (69 citation statements)
References 21 publications
“…Bounded example memory was considered by Osherson, Stob and Weinstein [28]; Lange and Zeugmann [25] extended this study. Wiehagen [33] and Lange and Zeugmann [25] introduced and studied feedback learning; Case, Jain, Lange and Zeugmann [8] quantified the amount of feedback queries per round.…”
Section: Learning With Feedback and Memory Limitations (mentioning)
confidence: 99%
“…Both these concepts were reformalized (the former named n-feedback learning, and the latter named n-memory learning) and thoroughly studied and discussed in [CJLZ99]. Motivation for these sorts of learnability models, as discussed in [CJLZ99], comes from the rapidly developing field of knowledge discovery in databases, which includes, in particular, data mining, knowledge extraction, information discovery, data pattern processing, information harvesting, etc. Many of these tasks represent interactive incremental iterative processes (cf., e.g., [BA96] and [FPSS96]), working on huge data sets, finding regularities, and verifying them on small samples of the overall data.…”
Section: Introduction (mentioning)
confidence: 99%
“…Many of these tasks represent interactive incremental iterative processes (cf., e.g., [BA96] and [FPSS96]), working on huge data sets, finding regularities, and verifying them on small samples of the overall data. While the authors in [CJLZ99] explore the aforementioned formalizations of "looking back" at small (uniformly limited by some upper bound n) portions of input data in the context of regular iterative learning, we, in this research, allow the learner to test with the teacher if conjectures do not contain data in excess of the target language. Our learners may also be allowed to memorize some bounds derived from the input data seen so far -in the form of the maximal element or the length of input seen so far (the latter type of additional information for iterative learners was first considered in [CM08]).…”
Section: Introduction (mentioning)
confidence: 99%
“…This has motivated the analysis of so-called incremental learning, cf. [4, 5, 7-9, 12], where in each step of the learning process, the learner has access only to a limited number of examples. Thus, in each step, its hypothesis can be built upon these examples and its former hypothesis, only.…”
Section: Introduction (mentioning)
confidence: 99%
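The incremental-learning model described in the last statement above (each step sees one new example, the previous hypothesis, and at most a bounded number of stored examples) can be sketched as follows. This is a minimal illustration, not code from the cited paper; the names `incremental_learn` and `update` are hypothetical, and the concrete update rule is supplied by the caller.

```python
from collections import deque


def incremental_learn(stream, memory_bound, update):
    """Sketch of an iterative learner with bounded example memory.

    At each step the learner may use only its previous hypothesis,
    the current example, and at most `memory_bound` remembered
    examples (the "n-memory" idea discussed in the excerpts above).
    """
    memory = deque(maxlen=memory_bound)  # oldest examples are evicted
    hypothesis = None
    for example in stream:
        # The new hypothesis depends only on the bounded state.
        hypothesis = update(hypothesis, example, list(memory))
        memory.append(example)
    return hypothesis
```

As a toy usage, learning the maximum element of a stream needs no stored examples at all (memory bound 0), since the running hypothesis itself carries all required state: `incremental_learn([3, 1, 4, 1, 5], 0, lambda h, x, mem: x if h is None else max(h, x))` returns `5`.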