2006
DOI: 10.21236/ada456308

A Menagerie of Tracks at Maryland: HARD, Enterprise, QA, and Genomics, Oh My!

Abstract: This year, the University of Maryland participated in four separate tracks: HARD, enterprise, question answering, and genomics. Our HARD experiments involved a trained intermediary who searched for documents on behalf of the user, created clarification forms manually, and exploited user responses accordingly. The aim was to better understand the nature of single-iteration clarification dialogs and to develop an "ontology of clarifications" that can be leveraged to guide system development. For the enterprise t…

Citations: cited by 6 publications (2 citation statements). References: 8 publications.
“…The result is also comparable with that of a human manual run, which attained an F3 score of 0.299 on the same question set [9]. This result confirms that interesting nuggets do indeed play a significant role in picking up definitional answers, and may be more vital than using information-finding lexical patterns.…”
Section: Informativeness vs. Interestingness (supporting)
confidence: 79%
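For context, the F3 score cited above is the recall-weighted F-measure used in TREC's nugget-based evaluation of definition-style questions, i.e. the general F_beta with beta = 3. The sketch below states that general formula, with P and R standing for (nugget) precision and recall, and leaves aside TREC's length-based approximation of precision:

F_{\beta} = \frac{(1+\beta^{2})\,P\,R}{\beta^{2}\,P + R}, \qquad F_{3} = \frac{10\,P\,R}{9\,P + R}

Because beta = 3 weights recall far more heavily than precision, a score such as 0.299 is driven mostly by how many of the gold nuggets a run recovers.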
“…To tackle the problem of automatic expansion within the context of TREC-TS, we need to determine if the information contained in a nugget is also contained within a given sentence. This problem has been previously studied in the domain of question answering [7, 25, 26, 28, 48-50], and prior works have also examined its application within the information retrieval domain [16, 30, 38-41] for the purposes of automatic expansion to address the challenges imposed by incomplete relevance assessments. A general assumption made by these approaches is that the relevance of a document is defined by the presence of an "information nugget" relevant to the information need.…”
Section: Nugget-based Expansion (mentioning)
confidence: 99%
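The containment test described in this excerpt can be made concrete with a minimal sketch. The snippet below uses simple stopword-filtered term overlap as a stand-in for the nugget-matching methods surveyed in the cited works; the tokenizer, the 0.6 threshold, and the example strings are illustrative assumptions, not a method from this paper or the papers citing it.

import re

# Crude stopword list; a real system would use stemming/lemmatization and a fuller list.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "is", "are", "for"}

def content_terms(text):
    """Lowercase, tokenize, and drop stopwords."""
    return {t for t in re.findall(r"[a-z0-9]+", text.lower()) if t not in STOPWORDS}

def nugget_matches(nugget, sentence, threshold=0.6):
    """Return True if enough of the nugget's content terms appear in the sentence.

    Encodes the general assumption quoted above: a sentence is treated as relevant
    if it appears to contain the information of the nugget. The overlap criterion
    and the threshold are illustrative assumptions only.
    """
    nugget_terms = content_terms(nugget)
    if not nugget_terms:
        return False
    overlap = len(nugget_terms & content_terms(sentence))
    return overlap / len(nugget_terms) >= threshold

# Hypothetical usage: decide whether an unjudged sentence covers a known nugget,
# which is the basis for automatically expanding incomplete relevance assessments.
nugget = "Hurricane Katrina made landfall in Louisiana in August 2005"
sentence = "In August 2005, Hurricane Katrina made landfall near Buras, Louisiana."
print(nugget_matches(nugget, sentence))  # -> True under this crude criterion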