Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1629

Discourse Representation Parsing for Sentences and Documents

Abstract: We introduce a novel semantic parsing task based on Discourse Representation Theory (DRT; Kamp and Reyle 1993). Our model operates over Discourse Representation Tree Structures, which we formally define for sentences and documents. We present a general framework for parsing discourse structures of arbitrary length and granularity. We achieve this with a neural model equipped with a supervised hierarchical attention mechanism and a linguistically-motivated copy strategy. Experimental results on sentence- and docu…
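To make the notion of a Discourse Representation Tree Structure more concrete, here is a minimal, hypothetical Python sketch of how such a tree could be represented: a document node groups sentence-level boxes under a discourse relation, and each box holds discourse referents and conditions. All class and field names are illustrative assumptions, not the paper's actual data format.

```python
# Hypothetical sketch of a Discourse Representation Tree Structure (DRTS):
# a document node groups sentence-level DRSs ("boxes") under a discourse
# relation; each box holds referents and conditions. Illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Condition:
    predicate: str                                  # e.g. "eat", "Agent", "NOT"
    args: List[str]                                 # discourse referents
    nested: List["Box"] = field(default_factory=list)  # for NOT, IMP, OR, ...


@dataclass
class Box:
    label: str                                      # e.g. "b1"
    referents: List[str]                            # e.g. ["x1", "e1"]
    conditions: List[Condition]


@dataclass
class DocumentNode:
    relation: str                                   # rhetorical relation, e.g. "CONTINUATION"
    children: List[Box]                             # sentence-level boxes


# Toy example: "A man eats. He smiles."
doc = DocumentNode(
    relation="CONTINUATION",
    children=[
        Box("b1", ["x1", "e1"],
            [Condition("man", ["x1"]), Condition("eat", ["e1", "x1"])]),
        Box("b2", ["e2"],
            [Condition("smile", ["e2", "x1"])]),
    ],
)
```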

Cited by 24 publications (43 citation statements). References 32 publications.
“…We found that adding character-level representations generally improved performance, though we did not find a clear preference for either the one-encoder or two-encoder model. We believe that, given the better performance of the two-encoder model on the fairly short documents of the non-English languages (see Figure 3), this model is likely the most useful in semantic parsing tasks with single sentences, such as SQL parsing (Zelle and Mooney, 1996; Iyer et al., 2017; Finegan-Dollak et al., 2018), while the one-encoder char-CNN model has more potential for tasks with longer sentences/documents, such as AMR (Banarescu et al., 2013), UCCA (Abend and Rappoport, 2013) and GMB-based DRS parsing (Liu et al., 2018, 2019a). The latter model also has more potential to be applicable to other (semantic parsing) systems, as it can be applied to all systems that form token-level representations from a document.…”
Section: Discussion
confidence: 99%
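As an illustration of the kind of character-level component discussed in this statement, below is a minimal PyTorch sketch of a char-CNN that produces one character-based vector per token, which could be concatenated with word embeddings inside a single encoder. The hyperparameters and names are illustrative assumptions, not the cited systems' actual configuration.

```python
# Minimal char-CNN sketch: one character-based vector per token, suitable for
# concatenation with word embeddings. Illustrative hyperparameters only.
import torch
import torch.nn as nn


class CharCNN(nn.Module):
    def __init__(self, n_chars: int, char_dim: int = 30, n_filters: int = 50, width: int = 3):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=width, padding=width // 2)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, n_tokens, max_chars) -> one vector per token
        b, t, c = char_ids.shape
        x = self.embed(char_ids.view(b * t, c))        # (b*t, max_chars, char_dim)
        x = self.conv(x.transpose(1, 2))               # (b*t, n_filters, max_chars)
        x = torch.relu(x).max(dim=2).values            # max-pool over characters
        return x.view(b, t, -1)                        # (batch, n_tokens, n_filters)


# Usage: concatenate with word embeddings before the (single) sentence encoder.
char_ids = torch.randint(1, 100, (2, 7, 12))           # 2 documents, 7 tokens, 12 chars each
token_char_repr = CharCNN(n_chars=100)(char_ids)
print(token_char_repr.shape)                           # torch.Size([2, 7, 50])
```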
“…More recently, Liu et al. (2018) proposed a neural model that produces (tree-structured) DRSs in three steps: it first learns the general (box) structure of a DRS, after which specific conditions and referents are filled in. In follow-up work (Liu et al., 2019a), they extend the model with an improved attention mechanism and constrain the decoder to ensure well-formed output. This model achieved impressive performance on both sentence-level and document-level DRS parsing on GMB data.…”
Section: Discourse Representation Structures
confidence: 99%
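To illustrate what constraining a decoder for well-formed output can look like in practice, here is a minimal sketch that restricts the set of legal next tokens over a toy bracketed DRS-tree vocabulary. The token inventory and rules are illustrative assumptions, not the authors' actual constraint set.

```python
# Sketch of decoder constraining for well-formed bracketed output: given the
# tokens generated so far, only structurally valid next tokens are allowed
# (e.g. a closing bracket is only legal while a box is still open).
from typing import List, Set

OPEN, CLOSE, COND, EOS = "DRS(", ")", "COND", "<eos>"


def allowed_next_tokens(prefix: List[str], max_len: int = 50) -> Set[str]:
    depth = prefix.count(OPEN) - prefix.count(CLOSE)
    allowed = set()
    if depth > 0:
        allowed.update({OPEN, COND, CLOSE})            # inside a box: open, fill, or close
    elif not prefix:
        allowed.add(OPEN)                              # must start by opening a box
    if depth == 0 and prefix:
        allowed.add(EOS)                               # may only stop once all boxes are closed
    if len(prefix) >= max_len - depth:
        allowed = {CLOSE} if depth > 0 else {EOS}      # force closure near the length limit
    return allowed


# Usage: at each step, mask the model's softmax to the allowed token set.
print(allowed_next_tokens([]))                         # only {'DRS('}
print(allowed_next_tokens([OPEN, COND]))               # {'DRS(', 'COND', ')'}
print(allowed_next_tokens([OPEN, COND, CLOSE]))        # only {'<eos>'}
```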
“…We compare two published systems on the GMB: DRTS-sent, a sentence-level parser (Liu et al., 2018), and DRTS-doc, a document-level parser (Liu et al., 2019a). On the PMB, we compare seven systems: Boxer, a CCG-based parser (Bos, 2015); AMR2DRS, a rule-based parser that converts AMRs to DRSs; SIM-SPAR, which outputs the DRS in the training set most similar to the current DRS; SPAR, which outputs a fixed DRS for each sentence; seq2seq-char, a character-based sequence-to-sequence clause parser (van Noord et al., 2018b); seq2seq-word, a word-based sequence-to-sequence clause parser; and a transformer-based clause parser (Liu et al., 2019b).…”
Section: Methods
confidence: 99%
“…Despite the large number of recently developed DRS parsing models (van Noord et al., 2018b; van Noord, 2019; Evang, 2019; Liu et al., 2019b; Fancellu et al., 2019; Le et al., 2019), the automatic evaluation of DRSs is not straightforward due to the non-standard DRS format shown in Figure 1(a). It is neither a tree (although a DRS-to-tree conversion exists; see Liu et al. 2018, 2019a for details) nor a graph. Evaluation so far relied on COUNTER (van Noord et al., 2018a), which converts DRSs to the clauses shown in Figure 1…”
Section: Introduction
confidence: 99%
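For a concrete picture of clause-based evaluation, below is a simplified sketch in the spirit of COUNTER: gold and predicted DRSs are treated as sets of clauses and scored with precision, recall and F1. The real tool additionally searches for the best mapping between gold and predicted variable names (similar to Smatch); that search is omitted here, and the clauses are invented toy data.

```python
# Simplified clause-level scoring in the spirit of COUNTER: F1 over matched
# clauses, assuming gold and predicted variable names are already aligned.
from typing import List, Tuple

Clause = Tuple[str, ...]   # e.g. ("b1", "REF", "x1") or ("b1", "man", "x1")


def clause_f1(gold: List[Clause], pred: List[Clause]) -> Tuple[float, float, float]:
    gold_set, pred_set = set(gold), set(pred)
    matched = len(gold_set & pred_set)
    precision = matched / len(pred_set) if pred_set else 0.0
    recall = matched / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1


# Toy data: the prediction gets one clause wrong ("walk" instead of "eat").
gold = [("b1", "REF", "x1"), ("b1", "man", "x1"), ("b1", "REF", "e1"),
        ("b1", "eat", "e1"), ("b1", "Agent", "e1", "x1")]
pred = [("b1", "REF", "x1"), ("b1", "man", "x1"), ("b1", "REF", "e1"),
        ("b1", "walk", "e1"), ("b1", "Agent", "e1", "x1")]
print(clause_f1(gold, pred))   # roughly (0.8, 0.8, 0.8)
```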
“…Recent studies on meaning representation parsing (MRP) have focused on different semantic graph frameworks (Oepen et al., 2019) such as bilexical semantic dependency graphs (Peng et al., 2017; Wang et al., 2018; Dozat and Manning, 2018; Na et al., 2019), universal conceptual cognitive annotation (Hershcovich et al., 2017, 2018), abstract meaning representation (Wang and Xue, 2017; Guo and Lu, 2018; Song et al., 2019; Lam, 2019, 2020; Zhou et al., 2020), and discourse representation structures (Abzianidze et al., 2019; van Noord et al., 2018; Liu et al., 2019; Evang, 2019; Liu et al., 2020). To jointly address various semantic graphs, the Cross-Framework MRP task (MRP 2020) at the 2020 Conference on Computational Natural Language Learning (CoNLL) aims to develop semantic graph parsing across the following five frameworks (Oepen et al., 2020): 1) EDS: Elementary Dependency Structures (Oepen and Lønning, 2006), 2) PTG: Prague Tectogrammatical Graphs (Hajič et al., 2012), 3) UCCA: Universal Conceptual Cognitive Annotation (Abend and Rappoport, 2013), 4) AMR: Abstract Meaning Representation (Banarescu et al., 2013), and 5) DRG: Discourse Representation Graphs (Abzianidze et al., 2017).…”
Section: Introduction
confidence: 99%