2006
DOI: 10.1007/11733836_41
RAF: An Activation Framework for Refining Similarity Queries Using Learning Techniques

Cited by 7 publications (13 citation statements). References 12 publications.
“…Yet, before [12], [5], [11], [27] and [10], Motro [23], [24] had discussed query modification techniques for supporting vague queries and approaches for explaining empty answers to a query, respectively. In user feedback-based query refinement techniques, only false positive (why) feedback has previously been emphasized in both the database and information extraction areas [20], [19]. For example, in [20], Ma et al. model user feedback query refinement both for learning the structure of the query and for learning the relative importance of query components, but they collect only false positive feedback from users.…”
Section: Related Work
confidence: 97%
“…In user feedback-based query refinement techniques, only false positive (why) feedback has previously been emphasized in both the database and information extraction areas [20], [19]. For example, in [20], Ma et al. model user feedback query refinement both for learning the structure of the query and for learning the relative importance of query components, but they collect only false positive feedback from users. In [19], Liu et al. collect false positives (why tuples), again identified by users, to modify the initial rules in information extraction settings.…”
Section: Related Work
confidence: 97%
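The statements above describe refinement schemes that learn the relative importance of query components from user-identified false positives. As a minimal illustration of that idea (not the cited papers' actual algorithms; the function name, weight representation, and learning rate are assumptions for this sketch), a weighted similarity query can be refined by down-weighting components that scored the rejected results highly:

```python
# Hypothetical sketch: a similarity query as a weighted sum of component
# scores, refined from false-positive (why) feedback by penalizing the
# components that contributed most to each rejected result.

def refine_weights(weights, component_scores, false_positives, lr=0.1):
    """weights: {component: weight}; component_scores: {tuple_id: {component: score}};
    false_positives: tuple ids the user rejected. Returns renormalized weights."""
    new = dict(weights)
    for tid in false_positives:
        for comp, score in component_scores[tid].items():
            # The more a component "liked" the rejected tuple, the more it is penalized.
            new[comp] -= lr * score
    total = sum(max(w, 0.0) for w in new.values())
    return {c: max(w, 0.0) / total for c, w in new.items()}

weights = {"color": 0.5, "shape": 0.5}
scores = {"t1": {"color": 0.9, "shape": 0.1}}  # t1 matched mostly on color
refined = refine_weights(weights, scores, ["t1"])
# after refinement, "color" carries less weight than "shape"
```

Because only false-positive feedback is collected, weights can only be pushed down and renormalized; this is precisely the asymmetry the citing papers point out.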
“…In user feedback-based query refinement techniques, only false positive (why) feedback has been emphasized before in both the database and information extraction areas [14], [15]. In [14], Ma et al. model user feedback query refinement both for learning the structure of the query and for learning the relative importance of query components, but they collect only false positive feedback from users.…”
Section: Effectiveness
confidence: 98%
“…In [14], Ma et al. model user feedback query refinement both for learning the structure of the query and for learning the relative importance of query components, but they collect only false positive feedback from users. Previous studies [5], [4], [6], [2] and [3] have addressed …”
Section: Effectiveness
confidence: 99%
“…Our work differs significantly from these retrieval and refinement approaches (e.g., [4,7]) since we focus on the existing query formulation and attempt to give the user a sense of the query completion as we dynamically prune the search space based on user feedback. In addition, our refinement techniques are built on top of the dynamic search space.…”
Section: Related Work
confidence: 99%