2020
DOI: 10.1007/978-981-15-6168-9_38

Transfer Learning for Information Extraction with Limited Data

Cited by 9 publications (6 citation statements)
References 12 publications

“…It usually uses fine-tuning, which adapts pretrained ML models with target training samples (Gururangan et al., 2020). In business scenarios, TL can help to reduce the number of training samples (Nguyen et al., 2020).…”
Section: Seq2seq and Attention
Citation type: mentioning (confidence: 99%)
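The statement above frames transfer learning as fine-tuning: a model pretrained on large corpora is adapted with a handful of target samples. A minimal sketch of that pattern, assuming a PyTorch/HuggingFace `transformers` setup; the checkpoint name, example texts, labels, and hyperparameters are illustrative assumptions, not taken from the cited papers.

```python
# Hypothetical sketch: adapting a pretrained model with a few target samples.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# A small labelled target set (hypothetical business texts).
texts = ["Invoice total does not match the purchase order.",
         "Delivery confirmed and payment settled."]
labels = torch.tensor([1, 0])  # 1 = issue reported, 0 = no issue

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes often suffice when the backbone is pretrained
    optimizer.zero_grad()
    out = model(**batch, labels=labels)
    out.loss.backward()  # gradients adapt the pretrained weights, no training from scratch
    optimizer.step()
```

The point of the pattern is that only the adaptation step sees the scarce target data; the general language knowledge already sits in the pretrained weights.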
“…However, it has human annotations for three NLP tasks, which are beneficial for the analysis of incident reports. In addition, a small number of samples is still helpful in business scenarios for training AI models by using transfer learning (Devlin et al., 2019; Nguyen et al., 2020).…”
Section: Quantitative Observation
Citation type: mentioning (confidence: 99%)
“…We employed a pre-trained BERT as the embedding layer for two reasons: (i) BERT has achieved promising results on 11 NLP tasks, including sequence tagging [5,13], and (ii) BERT was contextually trained on a huge amount of data, so it can encode word meaning, which is important for machine learning models.…”
Section: Embedding Layer
Citation type: mentioning (confidence: 99%)
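The quoted design, a pre-trained BERT used as the embedding layer beneath a sequence-tagging head, can be sketched as follows. This is a minimal sketch assuming PyTorch and HuggingFace `transformers`; the checkpoint, tag count, and linear head are hypothetical stand-ins for the citing paper's actual architecture.

```python
# Hypothetical sketch: BERT as a contextual embedding layer for sequence tagging.
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class BertTagger(nn.Module):
    def __init__(self, num_tags: int):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-cased")
        # Contextual BERT vectors feed a lightweight per-token tagging head.
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_tags)

    def forward(self, input_ids, attention_mask):
        # Last hidden states: one contextual vector per subword token.
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)  # (batch, seq_len, num_tags) logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = BertTagger(num_tags=5)  # e.g. a small BIO tag set (hypothetical)

batch = tokenizer(["Transfer learning reduces annotation cost."],
                  return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # one tag distribution per subword token
```

Because the embedding layer is pretrained, only the small linear head must be learned from the limited task data, which is what makes the setup attractive when annotations are scarce.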