Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1231

A Unified Neural Coherence Model

Abstract: Recently, neural approaches to coherence modeling have achieved state-of-the-art results in several evaluation tasks. However, we show that most of these models often fail on harder tasks with more realistic application scenarios. In particular, the existing models underperform on tasks that require the model to be sensitive to local contexts such as candidate ranking in conversational dialogue and in machine translation. In this paper, we propose a unified coherence model that incorporates sentence grammar, i…

Cited by 32 publications (40 citation statements)
References 17 publications

“…propose a local discriminative model that retains the advantages of generative models and uses a smaller negative sampling space that can learn against incorrect orderings. Moon et al. (2019) propose a unified model that incorporates sentence syntax, inter-sentence coherence relations, and global topic structures in a single Siamese framework.…”
Section: Coherence Models
Citation type: mentioning; confidence: 99%
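The Siamese training setup mentioned in this statement can be illustrated with a short sketch: one shared encoder scores an original document and a corrupted variant, and a margin loss ranks the original higher. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation; the encoder, hidden size, and margin value are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SiameseCoherenceRanker(nn.Module):
    """Hypothetical sketch of a Siamese coherence ranker: the same encoder
    and scorer are applied to both documents, and a margin ranking loss
    pushes the coherent document above the corrupted one."""

    def __init__(self, encoder: nn.Module, hidden_dim: int = 768, margin: float = 1.0):
        super().__init__()
        self.encoder = encoder            # any module mapping a document tensor to a vector
        self.scorer = nn.Linear(hidden_dim, 1)
        self.margin = margin

    def score(self, doc: torch.Tensor) -> torch.Tensor:
        # Shared weights (the "Siamese" part): the same parameters score
        # every document that passes through.
        return self.scorer(self.encoder(doc)).squeeze(-1)

    def forward(self, coherent_doc: torch.Tensor, corrupted_doc: torch.Tensor):
        pos, neg = self.score(coherent_doc), self.score(corrupted_doc)
        # Hinge-style margin ranking loss over coherence scores.
        loss = torch.relu(self.margin - (pos - neg)).mean()
        return loss, pos, neg
```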
“…They have been evaluated in mainly two ways. The most common approach has been to evaluate them on synthetic discrimination tasks that involve identifying the right order of the sentences at the local and global levels (Barzilay and Lapata, 2008; Elsner and Charniak, 2011b; Moon et al., 2019). The other (rather infrequent) way has been to assess the impact of coherence score as an additional feature in downstream tasks like readability assessment and essay scoring (Barzilay and Lapata, 2008; Mesgar and Strube, 2018).…”
Section: Introduction and Related Work
Citation type: mentioning; confidence: 99%
“…Mohiuddin et al. (2018) further use the neural entity grid to model the coherence of conversations. More recently, Moon et al. (2019) incorporate local and global coherence models into a unified framework.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
“…There exists a large body of work in linguistics regarding different notions of coherence, such as the influence of coreference (Hobbs, 1979; Barzilay and Lapata, 2008, inter alia), Centering theory (Grosz et al., 1995), discourse structure (Mann and Thompson, 1987; Webber et al., 2003), and phenomena that connect utterances in dialogue, such as conversational maxims (Grice, 1975) or speaker interaction (Lascarides and Asher, 2009). Many of these are also mentioned by coherence evaluation studies; nonetheless, they mostly revert to the use of some form of sentence-order variations (Chen et al., 2019; Moon et al., 2019; Mesgar et al., 2020). While some progress has been made towards incorporating more linguistically motivated test sets (Chen et al., 2019; Mohammadi et al., 2020; Pishdad et al., 2020), most evaluation studies focus on models trained specifically on coherence classification and prediction tasks.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
“…Since sequentiality is central to the language modelling task, models successfully distinguish between both versions. This shuffling technique has been widely applied in the evaluation of coherence models (Barzilay and Lapata, 2008; Chen et al., 2019; Moon et al., 2019; Mesgar et al., 2020). We include it as a baseline for our method, in order to contrast how more fine-grained notions of coherence compare to this broad approach.…”
Section: Sentence Order Baseline Test Suite
Citation type: mentioning; confidence: 99%
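As a concrete illustration of the sentence-order shuffling baseline described in this statement, the sketch below builds (original, permuted) discrimination pairs from a list of sentences. The function name and parameters are illustrative assumptions; details such as the number of permutations per document vary across studies.

```python
import random

def make_shuffle_pairs(sentences, num_negatives=20, seed=13):
    """Build (original, shuffled) pairs for the sentence-order
    discrimination task: a coherence model should score the original
    ordering above each shuffled variant."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(num_negatives):
        permuted = sentences[:]
        rng.shuffle(permuted)
        if permuted != sentences:   # drop permutations identical to the original
            pairs.append((sentences, permuted))
    return pairs

# Toy usage with a three-sentence "document".
doc = ["John bought a car.", "He drives it to work.", "The commute is short."]
for original, shuffled in make_shuffle_pairs(doc, num_negatives=3):
    print(shuffled)
```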