2022
DOI: 10.1016/j.jbi.2021.103982
AMMU: A survey of transformer-based biomedical pretrained language models

Cited by 124 publications (65 citation statements)
References 143 publications
“…Machine learning and artificial intelligence have been increasingly applied in various domains such as computer vision 51,52, natural language processing [53][54][55], drug discovery 56,57, QSAR [58][59][60], and genomics [61][62][63]. AI methods such as convolutional neural networks (CNNs) 64 and recurrent neural networks (RNNs) 65, which are extensively used in computer vision and natural language processing, have been investigated for identifying protein binding sites in DNA and RNA sequences, and have achieved state-of-the-art performance [66][67][68].…”
Section: Discussion (mentioning)
confidence: 99%
“…Another emerging area is exploring generalized zero-shot learning (GZSL) [36], in which classes unseen during training are encountered at test time. Further, the performance of domain-specific LMs can be improved by reducing biases and injecting human-curated knowledge bases [37].…”
Section: Limitations and Future Directions (mentioning)
confidence: 99%
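The zero-shot setting mentioned in the statement above can be illustrated with a minimal sketch using the Hugging Face transformers zero-shot-classification pipeline; the model name, example sentence, and candidate labels below are illustrative assumptions and do not come from the cited papers.

```python
# Minimal sketch: zero-shot classification of a biomedical sentence with a
# general-purpose NLI model. The model and labels are illustrative assumptions,
# not taken from the cited work.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sentence = "The patient was prescribed metformin to manage type 2 diabetes."
candidate_labels = ["drug-disease relation", "adverse drug event", "diagnosis"]

result = classifier(sentence, candidate_labels=candidate_labels)
print(result["labels"][0], result["scores"][0])  # top-ranked label and its score
```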
“…Previously proposed systems (7, 25) provide little, if any, interpretability. With GeMI, by contrast, we have taken a first step toward interpretability, paired with a functioning and effective system.…”
Section: Related Work (mentioning)
confidence: 99%
“…Few studies have addressed the problem of making the results explainable (35). By comparison, fewer approaches have employed transformer-based techniques for biomedical text extraction tasks (25), mainly focusing on entity relations (36, 37). To the best of our knowledge, transformer-based approaches applied to biomedical tasks have not yet been combined with explainability approaches, as proposed in this article.…”
Section: Related Work (mentioning)
confidence: 99%