2021
DOI: 10.48550/arxiv.2109.14927
Preprint

BERT got a Date: Introducing Transformers to Temporal Tagging

Abstract: Temporal expressions in text play a significant role in language understanding, and correctly identifying them is fundamental to various retrieval and natural language processing systems. Previous works have slowly shifted from rule-based to neural architectures capable of tagging expressions with higher accuracy. However, neural models cannot yet distinguish between different expression types at the same level as their rule-based counterparts. In this work, we aim to identify the most suitable transformer architecture…
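A minimal sketch of the task the abstract describes, framed as token classification with a transformer encoder; the checkpoint and the TIMEX3-style BIO label set below are illustrative assumptions, not the paper's exact configuration:

```python
# Temporal tagging as token classification: predict a BIO label per subword.
# NOTE: the label set and "bert-base-cased" checkpoint are assumptions for
# illustration; the classification head here is untrained, so predictions
# are random until the model is fine-tuned on annotated data.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Illustrative BIO scheme over the four TIMEX3 expression types.
LABELS = ["O", "B-DATE", "I-DATE", "B-TIME", "I-TIME",
          "B-DURATION", "I-DURATION", "B-SET", "I-SET"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(LABELS)
)

sentence = "The meeting was moved from last Friday to June 3, 2021."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0]

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, pred_ids):
    print(f"{token:>12}  {LABELS[pred]}")
```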

Cited by 3 publications (4 citation statements) | References 17 publications (37 reference statements)
“…We demonstrated the effectiveness of our proposed method using a variety of pre-trained models of varying sizes. Centralised (Ahmed, Lin Jerry, & Srivastava, 2022b), FedAVG (Li et al., 2019) and embedded averages were used to evaluate the BERT model (Almasian et al., 2021).…”
Section: Experimental Results and Analysis
Citation type: mentioning; confidence: 99%
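For context on the FedAVG setting named in the excerpt above, a rough sketch of the aggregation step: client models are trained locally, then parameter-averaged on the server, weighted by local dataset size. Function and variable names are illustrative assumptions, not taken from the cited papers.

```python
# Sketch of FedAvg aggregation: weighted average of client model parameters.
import torch

def fedavg(client_state_dicts, client_sizes):
    """Average client state dicts, weighting each client by its data size."""
    total = float(sum(client_sizes))
    averaged = {}
    for name in client_state_dicts[0]:
        averaged[name] = sum(
            sd[name].float() * (n / total)
            for sd, n in zip(client_state_dicts, client_sizes)
        )
    return averaged

# Usage (hypothetical): server_model.load_state_dict(
#     fedavg([m.state_dict() for m in client_models],
#            [len(d) for d in client_datasets]))
```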
“…The active learning process is shown in Figures 1, 2, and 3. The proposed technique first trains a classifier (model) for each labelled dataset using the temporally pre-trained network (Almasian et al., 2021). The classifier is then applied to the broader pool of unlabelled data to create highly reliable pseudo-labels.…”
Section: Temporal Focus Time Attention Network
Citation type: mentioning; confidence: 99%
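The self-training step this excerpt describes (train on labelled data, score the unlabelled pool, keep only high-confidence predictions) can be sketched as follows; the 0.95 threshold and the scikit-learn classifier are assumptions for illustration, not the cited paper's exact setup.

```python
# Sketch of confidence-filtered pseudo-labelling over an unlabelled pool.
from sklearn.linear_model import LogisticRegression

def pseudo_label(X_labeled, y_labeled, X_pool, threshold=0.95):
    """Return high-confidence pseudo-labels for rows of X_pool (numpy arrays)."""
    clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    proba = clf.predict_proba(X_pool)          # shape: (n_pool, n_classes)
    confidence = proba.max(axis=1)
    keep = confidence >= threshold             # keep only reliable predictions
    return X_pool[keep], proba[keep].argmax(axis=1)

# The kept (X, y) pairs would then be merged into the training set and the
# classifier retrained, iterating the active/self-training loop.
```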
“…Recently, there has been an increase in attention to the infusion of temporal information into contextualized embeddings with the goal of improving prediction tasks. However, the focus has primarily been on temporal relation prediction (Liu et al., 2019; Guan and Devarakonda, 2020), with some recent work on temporal tagging in the general domain (Almasian et al., 2022) and prediction of clinical outcomes (Pang et al., 2021). As of yet, there are no publications utilizing contextualized embeddings for the task of temporal disambiguation of relative temporal expressions.…”
Section: Methods
Citation type: mentioning; confidence: 99%
“…Seq-to-Seq: Previous works have reported a sequence-to-sequence approach using the encoder-decoder architecture as an alternative to the sequence tagging scheme (Almasian, Aumiller, and Gertz 2021; Yan et al. 2021). Following these works, we use a transformer-based generative framework to auto-regressively decode the acronyms and long-forms present in the input sentence.…”
Section: Baselines
Citation type: mentioning; confidence: 99%
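A minimal sketch of the sequence-to-sequence alternative this last excerpt mentions: instead of one label per token, an encoder-decoder model auto-regressively generates the annotated output text. The checkpoint, task prefix, and inline tag format are illustrative assumptions, not the cited papers' exact setup.

```python
# Sketch of generative tagging with an encoder-decoder model.
# NOTE: "t5-small" is an assumed checkpoint; without fine-tuning on pairs of
# raw and tag-annotated sentences, the generated output will be generic.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# After fine-tuning, the target side would carry inline tags, e.g.:
#   "The meeting is on <DATE> June 3, 2021 </DATE>."
text = "tag temporal expressions: The meeting is on June 3, 2021."
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=48)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```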