Proceedings of BioNLP 15, 2015
DOI: 10.18653/v1/w15-3817

Automatic Detection of Answers to Research Questions from Medline Abstracts

Abstract: Given a set of abstracts retrieved from a search engine such as PubMed, we aim to automatically identify the claim zone in each abstract and then select the best sentence(s) from that zone to serve as an answer to a given query. The system provides fast access to the most informative sentence(s) in each abstract with respect to the query.
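
As a rough illustration of the two-stage idea in the abstract, the sketch below treats the last few sentences of an abstract as a stand-in for the claim zone and then picks the candidate sentence with the highest token overlap with the query. Both the tail heuristic and the overlap measure are assumptions for illustration; the paper's actual claim-zone detector and sentence scorer are not described in this excerpt.

    # Illustrative sketch only: the tail-of-abstract heuristic and the
    # token-overlap score are assumptions, not the paper's actual method.
    import re

    def split_sentences(abstract):
        # Naive split on sentence-final punctuation followed by whitespace.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", abstract) if s.strip()]

    def claim_zone(sentences, tail=3):
        # Crude proxy: claims/conclusions tend to appear near the end of an abstract.
        return sentences[-tail:]

    def best_answer(query, abstract):
        # Rank claim-zone sentences by Jaccard overlap with the query tokens.
        q = set(query.lower().split())
        def overlap(sentence):
            s = set(sentence.lower().split())
            return len(q & s) / max(len(q | s), 1)
        return max(claim_zone(split_sentences(abstract)), key=overlap)

For example, best_answer("Does aspirin reduce stroke risk?", abstract_text) would return the concluding sentence most lexically similar to the query.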

Cited by 10 publications (6 citation statements) · References 14 publications

“…The ACRES system (Summerscales et al., 2011) produces summaries of several trial characteristics, and was trained on 263 annotated abstracts. Hinting at more challenging tasks that can build upon foundational information extraction, Alamri and Stevenson (2015) developed methods for detecting contradictory claims in biomedical papers. Their corpus of annotated claims contains 259 sentences (Alamri and Stevenson, 2016).…”
Section: Related Work
confidence: 99%
“…Table 1 shows the performance results of the claim selection component. The authors in [1] relied on lexical similarity and a Z-score that computes sentence relevance with respect to the distribution of similarity scores of other sentences across the dataset. While this scoring function contributes to precision, it comes at the cost of recall.…”
Section: Claim Extraction Results
confidence: 99%
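
The z-score idea attributed to [1] in the excerpt above can be sketched as follows: each candidate sentence's lexical similarity to the query is standardised against the mean and standard deviation of the similarities of all candidates, so only sentences that stand out from that distribution receive a high score. TF-IDF cosine similarity and the scikit-learn/NumPy calls below are assumptions for illustration; the excerpt does not specify the original system's similarity measure.

    # Hypothetical sketch of z-score-based sentence relevance scoring;
    # TF-IDF cosine similarity is an assumed stand-in for "lexical similarity".
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rank_sentences_by_zscore(query, sentences):
        # Similarity of each candidate sentence to the query.
        matrix = TfidfVectorizer().fit_transform([query] + sentences)
        sims = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
        # Standardise against the distribution of similarities across candidates.
        mu, sigma = sims.mean(), sims.std()
        z = (sims - mu) / sigma if sigma > 0 else np.zeros_like(sims)
        order = np.argsort(-z)
        return [(sentences[i], float(z[i])) for i in order]

    query = "Does aspirin reduce the risk of stroke?"
    candidates = [
        "Aspirin significantly reduced stroke incidence in the treatment arm.",
        "Participants were recruited from three outpatient clinics.",
        "No significant difference in bleeding events was observed.",
    ]
    for sentence, score in rank_sentences_by_zscore(query, candidates):
        print(f"{score:+.2f}  {sentence}")

Scoring relative to the distribution, rather than by raw similarity, favours precision: sentences that are only moderately similar to the query fall below the cut, which matches the recall trade-off noted in the citation statement above.
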
“…The work by Shi and Bei (2019) is one of the few exceptions that targets this challenge, proposing a pipeline to extract health-related claims from headlines of health-themed news articles. The majority of other argument mining approaches for the biomedical domain focus on research literature (Blake, 2010; Alamri and Stevenson, 2015; Achakulvisut et al., 2019; Mayer et al., 2020).…”
Section: Claim Detection
confidence: 99%