2019
DOI: 10.1109/access.2018.2887165

Multi-Label Question Classification for Factoid and List Type Questions in Biomedical Question Answering

Abstract: Biomedical experts and bio-curators are unable to quickly find short and precise information using typical search engines as the amount of biomedical literature is increasing exponentially. The research community is focusing on biomedical question answering (QA) systems so that anyone can find precise information nuggets from the massive amount of biomedical literature. Generally, the user queries fall under different categories such as factoid, list, yes/no, or summary. The existing state-of-the-art question …
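
The abstract describes multi-label question classification, where a question may receive one or more of the categories above (factoid, list, yes/no, summary). As a minimal sketch of that general setup, and not the classifier proposed in the paper, a one-vs-rest linear model over TF-IDF features might look like the following; the example questions and labels are invented for illustration:

```python
# Minimal multi-label question classification sketch (illustrative data,
# not the paper's dataset or feature set).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

questions = [
    "Which gene is mutated in cystic fibrosis?",        # factoid
    "List the drugs approved for multiple sclerosis.",  # list
    "Is dexamethasone effective against croup?",        # yes/no
    "What is the role of p53 in apoptosis?",            # summary
]
labels = [["factoid"], ["list"], ["yesno"], ["summary"]]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)          # one binary column per label

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(questions)

# One-vs-rest wrapper: each label gets its own binary classifier, so a
# question can receive more than one label at prediction time.
clf = OneVsRestClassifier(LinearSVC())
clf.fit(X, y)

test = vectorizer.transform(["List the proteins that interact with BRCA1."])
print(binarizer.inverse_transform(clf.predict(test)))
```

A one-vs-rest wrapper is only one common way to allow multiple labels per question; the paper's actual features and learning algorithm are not reproduced here.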

Cited by 14 publications (5 citation statements) · References 29 publications

Citation statements (ordered by relevance):

“…However, several of the included studies arose from the BioASQ 5b 44 and 6b 45 shared tasks, which aimed to answer 4 types of questions (yes/no, factoids, list, and summary questions) and had 2 phases: information retrieval and exact answer production. Three studies arising from BioASQ 53, 54, 65 evaluated QA systems with a neural component, while 5 studies 52–54, 57, 65 evaluated QA systems that relied only on rule-based or classical ML components (eg, support vector machines). The neural components encoded questions and passages with a recurrent neural network (RNN) that were then used to create intermediate representations before answers were generated with additional layers.…”
Section: Results · mentioning · confidence: 99%
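
The statement above mentions neural QA components that encode questions and passages with an RNN into intermediate representations before additional layers produce the answer. A minimal sketch of that encoding step, assuming PyTorch and toy dimensions (this is not the architecture of any of the cited BioASQ systems), could look like this:

```python
# Toy bidirectional GRU encoder for question and passage tokens.
import torch
import torch.nn as nn

class QAEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                          bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor of word ids
        embedded = self.embed(token_ids)
        outputs, _ = self.rnn(embedded)   # per-token intermediate representations
        return outputs                    # (batch, seq_len, 2 * hidden_dim)

encoder = QAEncoder()
question = torch.randint(0, 10000, (1, 12))   # 12 toy token ids
passage = torch.randint(0, 10000, (1, 80))    # 80 toy token ids
q_repr, p_repr = encoder(question), encoder(passage)
# Downstream layers (e.g., attention plus a span-prediction head) would
# combine q_repr and p_repr to generate the exact answer.
print(q_repr.shape, p_repr.shape)
```
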
“…• Question answering: BERN can recognize biomedical named entities in questions and passages in question answering tasks such as BioASQ Task B [63], [64], and help improve performance, especially on “what” and “which” questions by classifying whether a span in a passage is an entity or not.…”
Section: A Use Cases · mentioning · confidence: 99%
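
The use case above amounts to filtering or validating candidate answer spans with a named-entity layer. A toy illustration of that filtering step is shown below; the `ner_tags` dictionary is a hypothetical stand-in for real BERN annotations, which are much richer (entity types, identifiers, offsets):

```python
# Keep only candidate answer spans that an NER layer marked as entities.
def filter_candidates(candidate_spans, ner_tags):
    """candidate_spans: list of strings; ner_tags: span -> bool (is entity)."""
    entity_spans = {span for span, is_entity in ner_tags.items() if is_entity}
    return [span for span in candidate_spans if span in entity_spans]

candidates = ["BRCA1", "the cell", "imatinib"]
ner_tags = {"BRCA1": True, "the cell": False, "imatinib": True}  # hypothetical
print(filter_candidates(candidates, ner_tags))   # ['BRCA1', 'imatinib']
```
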
“…Due to some restrictions, however, only definitional questions can be solved. Another research project focused on retrieval of answers from biomedical literature through narrowing down the candidate answer space by question classification and distributing a higher rank to the correct answers [10]. This research still suffered from some troublesome problems [7, 29], such as the need for a clear factoid and list type.…”
Section: Related Work · mentioning · confidence: 99%
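
The approach attributed to [10] (the paper indexed here) is to predict the question type first and then rank candidate answers of the expected type higher. A toy example of such type-aware re-ranking follows; the scoring scheme is invented for illustration and is not the ranking used in the paper:

```python
# Boost candidates whose type matches the predicted expected answer type.
def rerank(candidates, expected_type):
    """candidates: list of (answer, answer_type, retrieval_score) tuples."""
    boosted = [(score + (1.0 if ans_type == expected_type else 0.0), ans)
               for ans, ans_type, score in candidates]
    return [ans for _, ans in sorted(boosted, reverse=True)]

candidates = [("2015", "date", 0.6), ("BRCA1", "gene", 0.5), ("p53", "gene", 0.4)]
print(rerank(candidates, expected_type="gene"))   # genes ranked first
```
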
“…There have been several investigations concerning improvements of the query processing phase. For example, Cao et al [9], Wasim et al [10] and Abacha et al [11] have employed question classifying approaches, with semantic information obtained from the UMLS resources. However, some researchers have noted that these medical QA approaches have limitations in terms of the types and formats of questions that they can process [12].…”
Section: Introduction · mentioning · confidence: 99%
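
The statement above refers to question classifiers that combine lexical features with semantic information from UMLS. A hedged sketch of such feature augmentation is given below; the `SEMANTIC_TYPES` dictionary is a hypothetical stand-in for an actual UMLS Metathesaurus lookup, and the cited systems' real feature sets are not reproduced:

```python
# Combine TF-IDF question features with simple UMLS semantic-type indicators.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer

SEMANTIC_TYPES = {            # hypothetical term -> UMLS semantic type mapping
    "gene": "Gene or Genome",
    "drug": "Pharmacologic Substance",
    "protein": "Amino Acid, Peptide, or Protein",
}

def semantic_features(question, type_index):
    """Binary indicator vector over the known semantic types."""
    vec = np.zeros(len(type_index))
    for term, sem_type in SEMANTIC_TYPES.items():
        if term in question.lower():
            vec[type_index[sem_type]] = 1.0
    return vec

questions = ["Which gene causes cystic fibrosis?",
             "List drugs used for hypertension."]
type_index = {t: i for i, t in enumerate(sorted(set(SEMANTIC_TYPES.values())))}

tfidf = TfidfVectorizer().fit_transform(questions)
sem = csr_matrix(np.vstack([semantic_features(q, type_index) for q in questions]))
X = hstack([tfidf, sem])      # lexical + semantic-type feature matrix
print(X.shape)
```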