Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing 2020
DOI: 10.18653/v1/2020.bionlp-1.2
Sequence-to-Set Semantic Tagging for Complex Query Reformulation and Automated Text Categorization in Biomedical IR using Self-Attention

Abstract: Novel contexts, comprising a set of terms referring to one or more concepts, may often arise in complex querying scenarios such as evidence-based medicine (EBM) involving biomedical literature. These may not explicitly refer to entities or canonical concept forms occurring in a fact-based knowledge source, e.g. the UMLS ontology. Moreover, hidden associations between related concepts that are meaningful in the current context may not exist within a single document, but across documents in the collection. Predicting…

Cited by 5 publications (3 citation statements) · References 23 publications
“…For more details, see Appendix B. Our bi-encoder achieves a mean precision@10 score of 45.67 on TREC 2016 data in 5-fold cross-validation, comparable to state-of-the-art results (Das et al, 2020).…”
Section: Dense Retrieval Model
confidence: 51%
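The quoted figure is a mean precision@10 over 5-fold cross-validation. As a minimal sketch of how that metric is typically computed (the function name, document IDs, and fold data below are illustrative, not from the paper):

```python
def precision_at_k(ranked_doc_ids, relevant_ids, k=10):
    """Fraction of the top-k ranked documents that are relevant."""
    top_k = ranked_doc_ids[:k]
    return sum(1 for d in top_k if d in relevant_ids) / k

# Toy 5-fold illustration: each fold is (ranking, set of relevant doc IDs).
folds = [
    (["d1", "d2", "d3"] + [f"x{i}" for i in range(7)], {"d1", "d2", "d3"}),
] * 5
fold_scores = [precision_at_k(ranking, relevant) for ranking, relevant in folds]
mean_p10 = 100 * sum(fold_scores) / len(fold_scores)  # 30.0 on this toy data
```

Reported scores such as 45.67 are this per-fold average, scaled to a percentage.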
“…This embedding is run through a linear layer to produce a relevance score, trained using cross-entropy loss with respect to document relevance labels from the TREC 2016 dataset. Our cross-encoder achieves a mean precision@10 score of 48.33 on TREC 2016 in 5-fold cross-validation, which is also comparable to state-of-the-art performance on TREC CDS 2016 (Das et al, 2020).…”
Section: Reranker Model
confidence: 53%
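The reranker head described above (pooled embedding → linear layer → relevance score, trained with cross-entropy against relevance labels) can be sketched as follows; the embedding size, label scheme, and function names are assumptions for illustration, not the cited model's exact configuration:

```python
import numpy as np

def relevance_logits(pooled_embedding, W, b):
    # Linear layer mapping the pooled query-document embedding to two
    # logits, {not relevant, relevant}; the real head's dimensions differ.
    return pooled_embedding @ W + b

def cross_entropy(logits, label):
    # Softmax cross-entropy for a single (embedding, relevance-label) pair,
    # computed in log space for numerical stability.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

rng = np.random.default_rng(0)
emb = rng.normal(size=8)                     # toy pooled embedding
W, b = rng.normal(size=(8, 2)), np.zeros(2)  # untrained linear head
loss = cross_entropy(relevance_logits(emb, W, b), label=1)
```

At inference time, the "relevant" logit (or its softmax probability) serves as the document's relevance score for reranking.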
“…Another reason for limited prediction accuracy in traditional ML models is the lack of usage of semantic and syntactic information of the text. Recent advancements in natural language processing (NLP) such as word embeddings, transformers, and large language models (LLMs) have better capabilities in terms of understanding and utilization of syntactic and contextual semantic information of the text [1,2,3], which is likely to improve the classification performance.…”
Section: Introduction
confidence: 99%