2016
DOI: 10.1007/978-3-319-47674-2_16

Definition Extraction with LSTM Recurrent Neural Networks

Cited by 18 publications (17 citation statements)
References 8 publications
“…(4) E&S: A system based on more complex dependency-based features (Espinosa-Anke and Saggion, 2014). (5) LSTM-POS: An LSTM-based system which represents each sentence as a mixture of infrequent words and frequent words' associated part-of-speech (Li et al., 2016).…”
Section: Baselines (mentioning)
confidence: 99%
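The LSTM-POS baseline quoted above hinges on an input encoding that mixes surface words with POS tags. The snippet below is a minimal, hypothetical sketch of that kind of encoding, following the wording of the quoted statement (frequent words mapped to their POS tags, infrequent words kept as words); the frequency threshold, tagger, and helper names are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: mixed word/POS sentence encoding as described in the quoted baseline.
from collections import Counter
import nltk  # assumes nltk plus the 'averaged_perceptron_tagger' data are installed

def build_vocab(tokenized_sentences, min_count=2):
    """Collect the set of frequent (>= min_count occurrences) lower-cased tokens."""
    counts = Counter(tok.lower() for sent in tokenized_sentences for tok in sent)
    return {tok for tok, c in counts.items() if c >= min_count}

def mix_words_and_pos(tokens, frequent_vocab):
    """Frequent words become their POS tag; infrequent words keep their surface form."""
    return [tag if tok.lower() in frequent_vocab else tok.lower()
            for tok, tag in nltk.pos_tag(tokens)]

# Tiny illustration: 'a' is frequent here, so it is mapped to its POS tag ('DT'),
# while the rarer content words stay as words before being indexed and fed to an LSTM.
sents = [["A", "stack", "is", "a", "LIFO", "data", "structure", "."]]
print(mix_words_and_pos(sents[0], build_vocab(sents)))
```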
“…LSTM-CRF: A deep learning model for sequence labeling for definition extraction (DE) based on LSTM and CRF (Li, Xu, and Chung 2016).…”
Section: Results (mentioning)
confidence: 99%
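For orientation, a BiLSTM-CRF sequence labeler of the kind the quoted statement attributes to Li, Xu, and Chung (2016) can be sketched as follows. This is an illustrative sketch assuming PyTorch and the third-party pytorch-crf package; the class name and hyperparameters are assumptions, not the original implementation.

```python
# Sketch only: BiLSTM emissions scored by a linear-chain CRF for token-level DE tagging.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim // 2, batch_first=True,
                            bidirectional=True)
        self.emissions = nn.Linear(hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, token_ids, tags, mask):
        """Negative log-likelihood of the gold tag sequence under the CRF."""
        feats, _ = self.lstm(self.embed(token_ids))
        return -self.crf(self.emissions(feats), tags, mask=mask)

    def decode(self, token_ids, mask):
        """Viterbi-decode the most likely tag sequence (e.g., B/I/O spans for terms and definitions)."""
        feats, _ = self.lstm(self.embed(token_ids))
        return self.crf.decode(self.emissions(feats), mask=mask)
```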
“…This table also shows that our model can benefit from contextualized embeddings (e.g., BERT (Devlin et al. 2019)), as BERT can significantly improve the proposed model over three out of four datasets.

System                              P     R     F1
WCL (Navigli and Velardi 2010b)     98.8  60.7  75.2
DefMiner (Jin et al. 2013)          92.0  79.0  85.0
B&DC (Boella et al. 2014)           88.0  76.0  81.6
E&S (Espinosa-Anke et al. 2016)     85.9  85.3  85.4
LSTM-POS (Li, Xu, and Chung 2016)

Sequence Classification Performance: This section evaluates the models on the sentence classification task for DE. Due to its popularity in the DE literature for this setting, we first report the performance of the models on the general-domain dataset WCL.…”
Section: Results (mentioning)
confidence: 99%
“…System                        P / R / F1
DefMiner (Jin et al., 2013)   52.5 / 49.5 / 50.5
LSTM-CRF (Li et al., 2016)    57.1 / 55.9 / 56.2
GCDT (Liu et al., 2019a)      57.9 / 56.6 / 57.

We replace all citations and references to figures, tables, and sections with corresponding placeholders (e.g., CITATION, FIGURE), but keep the raw TeX format of mathematical symbols in order to retain the structure of the equations.…”
Section: Error Analysis on Predicted Definitions (mentioning)
confidence: 99%
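The preprocessing step quoted above (replacing citations and figure/table/section references with placeholders while keeping raw TeX math) can be illustrated with a small regex-based sketch. The exact LaTeX commands handled and any placeholder names beyond CITATION and FIGURE are illustrative assumptions, not the cited work's pipeline.

```python
# Sketch only: swap \cite-like and \ref-like commands for placeholders, leave math untouched.
import re

def insert_placeholders(tex: str) -> str:
    """Replace citation and figure/table/section references with placeholder tokens."""
    # \cite{...}, \citet{...}, \citep[...]{...}  ->  CITATION
    tex = re.sub(r"\\cite[tp]?\*?(\[[^\]]*\])?\{[^}]*\}", "CITATION", tex)

    # \ref{fig:...}, \cref{tab:...}, \autoref{sec:...}  ->  FIGURE / TABLE / SECTION
    ref_targets = {"fig": "FIGURE", "tab": "TABLE", "sec": "SECTION"}
    def _ref(match):
        label = match.group(1).lower()
        for prefix, placeholder in ref_targets.items():
            if label.startswith(prefix):
                return placeholder
        return "REFERENCE"  # fallback placeholder; an assumption for illustration
    tex = re.sub(r"\\(?:auto|c)?ref\{([^}]*)\}", _ref, tex)

    return tex  # inline math such as $f(x)=x^2$ passes through unchanged

print(insert_placeholders(r"As shown in \ref{fig:arch} and \cite{li2016}, $E=mc^2$."))
# -> As shown in FIGURE and CITATION, $E=mc^2$.
```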