2021
DOI: 10.1162/tacl_a_00439
Instance-Based Neural Dependency Parsing

Abstract: Interpretable rationales for model predictions are crucial in practical applications. We develop neural models that possess an interpretable inference process for dependency parsing. Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set. The training edges are explicitly used for the predictions; thus, it is easy to grasp the contribution of each edge to the predictions. Our experiments show that our instance-based models achiev…
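The abstract describes instance-based inference: each candidate dependency edge is scored and labeled by comparing it to edges from the training set, so the retrieved training edges themselves serve as the rationale for a prediction. The sketch below is a minimal illustration of that idea, not the authors' implementation: cosine similarity between a candidate-edge vector and training-edge vectors drives a k-nearest-neighbor labeling, and the neighbors are returned together with their similarities as each training edge's contribution. All identifiers (label_edge_by_neighbors, edge_repr, train_edge_reprs, train_edge_labels) are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of instance-based edge labeling:
# a candidate dependency edge is labeled by comparing its vector to
# vectors of training-set edges; the neighbors act as the rationale.
import numpy as np

def label_edge_by_neighbors(edge_repr, train_edge_reprs, train_edge_labels, k=5):
    """Label a candidate edge by its k most similar training edges.

    edge_repr:         (d,) vector for the candidate head-dependent pair
    train_edge_reprs:  (n, d) matrix of training-set edge vectors
    train_edge_labels: length-n list of dependency labels for those edges
    """
    # Cosine similarity between the candidate edge and every training edge.
    cand = edge_repr / (np.linalg.norm(edge_repr) + 1e-12)
    train = train_edge_reprs / (
        np.linalg.norm(train_edge_reprs, axis=1, keepdims=True) + 1e-12
    )
    sims = train @ cand  # (n,) similarity scores

    # Each retrieved neighbor's similarity is its explicit contribution
    # to the prediction, which is what makes the inference interpretable.
    top = np.argsort(-sims)[:k]
    votes = {}
    for i in top:
        votes[train_edge_labels[i]] = votes.get(train_edge_labels[i], 0.0) + sims[i]
    predicted = max(votes, key=votes.get)
    return predicted, [(train_edge_labels[i], float(sims[i])) for i in top]

# Toy usage with random vectors standing in for learned edge representations.
rng = np.random.default_rng(0)
train_reprs = rng.normal(size=(100, 16))
train_labels = [["nsubj", "obj", "amod", "det"][i % 4] for i in range(100)]
label, neighbors = label_edge_by_neighbors(rng.normal(size=16), train_reprs, train_labels)
print(label, neighbors[:2])
```

In the paper the edge representations come from a trained neural encoder rather than random vectors; the sketch only shows how predictions can be traced back to individual training edges.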

Cited by 2 publications (1 citation statement). References 40 publications (56 reference statements).
“…Results in Table 3 show that on the GENIA corpus, the joint model could perform better than (1) the baseline and (2) the two span-based models: the instance-based NER model (Ouchi et al., 2021) and the boundary-aware one (Zheng et al., 2019). However, our model produced lower F1 scores than BENSC (Tan et al., 2020) and MHSA (Xu et al., 2021).…”
Section: Results (mentioning)
Confidence: 92%