Proceedings of the 11th International Conference on World Wide Web 2002
DOI: 10.1145/511446.511500

Probabilistic question answering on the web

Abstract: Web-based search engines such as Google and NorthernLight return documents that are relevant to a user query, not answers to user questions. We have developed an architecture that augments existing search engines so that they support natural language question answering. The process entails five steps: query modulation, document retrieval, passage extraction, phrase extraction, and answer ranking. In this paper we describe some probabilistic approaches to the last three of these stages. We show how our techniqu…
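
For orientation, here is a minimal Python sketch of how the five-stage pipeline named in the abstract could be wired together. All heuristics below are simplified placeholders of my own, not the authors' implementation; step 2 (document retrieval) is assumed to be handled by an external search-engine call that supplies the documents list.

    import re
    from collections import Counter

    def modulate_query(question):
        # Step 1, query modulation: turn the question into search-engine-friendly
        # query strings (crude placeholder for the paper's modulation step).
        q = question.rstrip("?")
        declarative = re.sub(r"^(who|what|when|where|which|how)\s+(is|was|are|were)\s+",
                             "", q, flags=re.I)
        return [q, declarative]

    def extract_passages(documents, question, window=30):
        # Step 3, passage extraction: keep fixed-size word windows that share
        # terms with the question (documents come from step 2, an external engine).
        q_terms = set(question.lower().split())
        passages = []
        for doc in documents:
            words = doc.split()
            for i in range(0, max(1, len(words)), window // 2):
                chunk = words[i:i + window]
                if q_terms & {w.lower().strip(".,") for w in chunk}:
                    passages.append(" ".join(chunk))
        return passages

    def extract_phrases(passages):
        # Step 4, phrase extraction: runs of capitalized tokens as crude
        # answer candidates (a stand-in for real named-entity extraction).
        phrases = []
        for p in passages:
            phrases.extend(m.strip() for m in re.findall(r"(?:[A-Z][\w-]+\s?)+", p))
        return phrases

    def rank_answers(phrases, top_k=5):
        # Step 5, answer ranking: here simply by redundancy across passages,
        # standing in for the probabilistic ranking the paper describes.
        return Counter(phrases).most_common(top_k)

A caller would chain these as rank_answers(extract_phrases(extract_passages(documents, question))) after fetching documents with the modulated queries.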

Cited by 90 publications (77 citation statements). References 12 publications (2 reference statements).
“…We achieved a mean reciprocal rank of the answer (MRR) of 0.314, which is lower than that reported in [2], but seemingly better (although not directly comparable due to different testing methodologies) than the results reported for the other approaches [3], [1] that were completely trainable (not relying on hand-crafted rules). We believe that the lower results are due to our use of AltaVista instead of Google and to the fact that we did not use any manually crafted semantic filters.…”
Section: Empirical Evaluation and Conclusion (contrasting)
confidence: 51%
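
The MRR figure quoted above is the standard evaluation measure in this line of work: the reciprocal of the rank at which the first correct answer appears, averaged over all questions. A small self-contained sketch (function and variable names are my own, not from the cited papers):

    def mean_reciprocal_rank(ranked_answers, gold_answers):
        # ranked_answers[i]: the system's ranked candidate list for question i.
        # gold_answers[i]: the set of acceptable answers for question i.
        # Questions with no correct candidate in the list contribute 0.
        total = 0.0
        for candidates, gold in zip(ranked_answers, gold_answers):
            for rank, cand in enumerate(candidates, start=1):
                if cand in gold:
                    total += 1.0 / rank
                    break
        return total / len(ranked_answers)

    # Example: correct answers at ranks 1 and 2 give MRR = (1 + 0.5) / 2 = 0.75.
    print(mean_reciprocal_rank([["Paris", "Lyon"], ["1912", "1905"]],
                               [{"Paris"}, {"1905"}]))
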
“…In fact, the main objective of the effort was to examine how the system carried out its ranking processes according to a set of techniques. These techniques included the proximity algorithm and the probabilistic phrase ranking technique used by the system [21].…”
Section: Literature Review (mentioning)
confidence: 99%
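
To make the two ranking ingredients mentioned in this statement concrete, the sketch below combines a proximity signal (distance from a candidate phrase to the nearest query term) with a redundancy-based probability. The mixing weight and helper names are illustrative assumptions of mine, not the scheme described in [21].

    from collections import Counter

    def rank_candidate_phrases(candidates, passages, query_terms, alpha=0.5):
        # candidates: list of (phrase, token_position, passage_index) tuples.
        # query_terms: lowercased content words of the question.
        # Combines redundancy (how often a phrase recurs across passages) with
        # proximity to query terms; alpha is an illustrative mixing weight.
        counts = Counter(phrase for phrase, _, _ in candidates)
        total = sum(counts.values())
        scored = []
        for phrase, pos, pi in candidates:
            tokens = passages[pi].lower().split()
            q_positions = [i for i, t in enumerate(tokens) if t in query_terms]
            nearest = min((abs(pos - i) for i in q_positions), default=len(tokens))
            proximity = 1.0 / (1.0 + nearest)
            redundancy = counts[phrase] / total
            scored.append((alpha * redundancy + (1 - alpha) * proximity, phrase))
        return sorted(scored, reverse=True)
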
“…Rules created for a specific question ontology must be re-tailored before being applied to different ontologies. In addition to TREC QA track systems, several web-based QA systems have relied on such rules with limited success [26]. Therefore, there is a need for more robust systems that can easily be adapted to handle new data sets and question ontologies.…”
Section: Related Work (mentioning)
confidence: 99%