Proceedings of the 3rd Clinical Natural Language Processing Workshop 2020
DOI: 10.18653/v1/2020.clinicalnlp-1.18

Assessment of DistilBERT performance on Named Entity Recognition task for the detection of Protected Health Information and medical concepts

Abstract: Bidirectional Encoder Representations from Transformers (BERT) models achieve state-of-the-art performance on a number of Natural Language Processing tasks. However, their model size on disk often exceeds 1 GB, and the process of fine-tuning them and using them to run inference consumes significant hardware resources and runtime. This makes them hard to deploy to production environments. This paper fine-tunes DistilBERT, a lightweight deep learning model, on medical text for the named entity recognition task of …
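To make the approach described in the abstract concrete, below is a minimal sketch of DistilBERT set up as a token-classification (NER) model with the Hugging Face Transformers library. The checkpoint name, the BIO label set for PHI, and the example sentence are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch: DistilBERT with a token-classification head for NER.
# The label scheme and example text are hypothetical; the paper's exact
# tag set and training data are not reproduced here.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-NAME", "I-NAME", "B-DATE", "I-DATE", "B-PROBLEM", "I-PROBLEM"]

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-cased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

# Toy de-identification input; a real run would use a clinical corpus.
text = "Patient John Smith was admitted on 12 March 2019."
enc = tokenizer(text, return_tensors="pt")

# One label per subword token; the untrained head gives arbitrary tags
# until the model is fine-tuned on annotated clinical notes.
with torch.no_grad():
    pred_ids = model(**enc).logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
for tok, pid in zip(tokens, pred_ids):
    print(tok, labels[pid])
```

Fine-tuning would additionally pass word-aligned gold labels so that the model returns a loss to minimise; the appeal of DistilBERT noted in the abstract is that this loop runs faster and with a smaller memory footprint than with BERT-base.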

Cited by 12 publications (4 citation statements) · References 11 publications
“…Ensemble models boosted the performance of the models used by many points. The authors of [61] used this version of BERT for detecting health information along with named entity recognition tasks, and the detection was improved by half, which was promising.…”
Section: Types Of Classification Algorithm (mentioning, confidence: 99%)
“…This finding is reflected in the current data, which showcases RoBERTa's leading performance with an average F1 score of 0.8580. The work of [76] provides an interesting perspective by evaluating DistilBERT's performance on medical texts. The findings revealed that while DistilBERT achieves F1 scores comparable to those of BERT models on medical texts, its efficiency in runtime and resource usage stands out.…”
Section: Model Evaluation and Comparison (mentioning, confidence: 99%)
“…smaller and 60% faster compared to BERT-base whilst losing only 3% in performance. DistilBERT is good enough to explore the use of deep learning for this classification task (Adoma et al., 2020; Abadeer, 2020). This DistilBERT model will classify relevant sentences into Dependency or Logic and is provided with pretrained encoder weights, which were trained on a large corpus beforehand and are updated with backpropagation.…”
Section: Deep Learning Classifier (mentioning, confidence: 99%)
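The citation above describes DistilBERT with a classification head on top of pretrained encoder weights, used to sort sentences into Dependency or Logic. A minimal sketch of that kind of setup follows; only the two class names come from the quoted text, while the checkpoint and the example sentence are assumptions for illustration.

```python
# Minimal sketch: DistilBERT with a sequence-classification head that labels
# sentences as "Dependency" or "Logic". Class names come from the citation
# above; the checkpoint and example sentence are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

classes = ["Dependency", "Logic"]

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=len(classes),
    id2label=dict(enumerate(classes)),
    label2id={c: i for i, c in enumerate(classes)},
)

# Hypothetical process-description sentence to classify.
sentence = "If the invoice is approved, the payment step is executed."
enc = tokenizer(sentence, return_tensors="pt", truncation=True)

# The pretrained encoder weights are reused as-is; the classification head is
# randomly initialised, and the whole model is updated with backpropagation
# during fine-tuning, so these scores are meaningless before training.
with torch.no_grad():
    probs = model(**enc).logits.softmax(dim=-1)[0]
print({c: round(float(p), 3) for c, p in zip(classes, probs)})
```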
“…A fine-tuned BERT model was used for the NER task, since it is considered one of the top deep learning techniques for NLP tasks and is sufficient for the purposes of this study (Adoma et al., 2020; Abadeer, 2020). BERT was given a problem-specific dataset containing Logic sentences that have been tagged manually according to the tagging scheme described above.…”
Section: Decision Logic Tag Extraction (mentioning, confidence: 99%)