1996
DOI: 10.1016/s0306-4573(96)85005-9

Learning syntactic rules and tags with genetic algorithms for information retrieval and filtering: An empirical basis for grammatical rules

Abstract: The grammars of natural languages may be learned by using genetic algorithms that reproduce and mutate grammatical rules and part-of-speech tags, improving the quality of later generations of grammatical components. Syntactic rules are randomly generated and then evolve; those rules resulting in improved parsing and occasionally improved retrieval and filtering performance are allowed to further propagate. The LUST system learns the characteristics of the language or sublanguage used in document abstracts by l…
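The abstract describes the general genetic-algorithm loop (random rule generation, fitness-based selection, mutation) without giving the LUST system's details. A minimal, hypothetical sketch of that loop, where a "rule" is a sequence of part-of-speech tags and fitness is how many phrases in a tiny invented corpus the rule parses, might look like:

```python
import random

random.seed(0)  # reproducible toy run

# Invented toy setup (not from the paper): rules are short sequences of
# part-of-speech tags; the corpus is a few tagged phrases.
TAGS = ["DT", "JJ", "NN", "VB", "IN"]
CORPUS = [("DT", "JJ", "NN"), ("DT", "NN"), ("NN", "VB", "DT", "NN")]

def random_rule():
    """Randomly generate a candidate syntactic rule."""
    return tuple(random.choice(TAGS) for _ in range(random.randint(2, 3)))

def fitness(rule):
    """A rule 'parses' a phrase if it matches a contiguous tag sequence."""
    n = len(rule)
    return sum(
        any(phrase[i:i + n] == rule for i in range(len(phrase) - n + 1))
        for phrase in CORPUS
    )

def mutate(rule):
    """Mutate one tag position, keeping rule length fixed."""
    i = random.randrange(len(rule))
    return rule[:i] + (random.choice(TAGS),) + rule[i + 1:]

def evolve(pop_size=20, generations=30):
    """Keep the fitter half each generation; refill by mutating survivors."""
    population = [random_rule() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

This is only an illustration of the selection-and-mutation idea; the paper's actual fitness signal is parsing quality and, occasionally, retrieval and filtering performance.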

Cited by 33 publications (19 citation statements)
References 17 publications (24 reference statements)
“…During the intervening years, starting in 1989, Wong and Yao applied a probability distribution to an information retrieval model [10]; in 1995, Amanda Spink applied term relevance feedback [11], and in 1996, Losee applied "syntactical rules and tags" [12].…”
Section: Background Literature On Filtering Approaches
confidence: 99%
“…Early work in this area was done by Robertson and Jones in their 1976 paper which explored the use of "statistical techniques for exploiting relevance information to weight search terms" [8]. This work was still relevant 25 years later and was extended in their 2000 paper reporting on a series of experiments using a probabilistic model [9]. During the intervening years, starting in 1989, Wong and Yao applied a probability distribution to an information retrieval model [10]; in 1995, Amanda Spink applied term relevance feedback [11]; and in 1996, Losee applied "syntactical rules and tags" [12]. Since 2000, there have also been several experiments reporting the use of incorporated search behavior for relevance feedback [13], and exploring term dependencies [14] and Bayesian networks [15]. We shift our focus to the experiments reported in this paper. We extend the relevance feedback techniques and term weighting methods mentioned above, and also incorporate a behavioral approach, by offering the user the opportunity to divide their search model into inclusionary terms and exclusionary terms.…”
confidence: 99%
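The inclusionary/exclusionary split described in the statement above can be sketched with a simple filter; the term sets and documents here are invented for illustration, and the citing paper's actual method is not given in this excerpt:

```python
# Hedged sketch: a document passes the filter if it contains at least one
# inclusionary term and none of the exclusionary terms.
def passes_filter(doc_terms, include, exclude):
    terms = set(doc_terms)
    return bool(terms & include) and not (terms & exclude)

include = {"retrieval", "filtering"}   # hypothetical inclusionary terms
exclude = {"spam"}                     # hypothetical exclusionary terms

print(passes_filter(["information", "retrieval"], include, exclude))  # True
print(passes_filter(["retrieval", "spam"], include, exclude))         # False
```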
“…In cases where parts-of-speech are being considered, the parts-of-speech for each term in the document phrase must match the parts-of-speech found in the terms in a matching query phrase. These tags may be assumed to evolve over time; however, here they are treated as a predetermined set (Lankhorst, 1995; Losee, 1996a). Part-of-speech tagging of terms in queries and documents was done using the Brill part-of-speech tagger (Brill, 1994).…”
Section: Experimental Data and Measurement
confidence: 99%
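The tag-matching requirement in the statement above (assuming the phrase terms have already been matched) can be sketched as a comparison of tag sequences; the phrase representation and tags here are illustrative assumptions, not the paper's data structures:

```python
# Hypothetical sketch: each phrase is a list of (term, tag) pairs, e.g. as
# produced by a tagger such as Brill's. Two term-matched phrases only count
# as a match if their part-of-speech tags agree position by position.
def tags_match(query_phrase, doc_phrase):
    if len(query_phrase) != len(doc_phrase):
        return False
    return all(q_tag == d_tag
               for (_, q_tag), (_, d_tag) in zip(query_phrase, doc_phrase))

q = [("fast", "JJ"), ("retrieval", "NN")]
d = [("quick", "JJ"), ("filtering", "NN")]
print(tags_match(q, d))  # tags align position by position, so True
```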
“…In computational linguistics they were employed to improve the performance of anaphora resolution methods [9,10], resolve anaphora [11], study optimal vowel and tonal systems [12], build bilingual dictionaries [13], improve queries for information retrieval [14], and learn syntactic rules [15]. A good overview of applications in fields other than computational linguistics can be found in [16].…”
Section: Brief Introduction To Genetic Algorithms
confidence: 99%