Proceedings of the 12th International Conference on Natural Language Generation 2019
DOI: 10.18653/v1/w19-8667
Generating Quantified Descriptions of Abstract Visual Scenes

Abstract: Quantified expressions have always taken up a central position in formal theories of meaning and language use. Yet quantified expressions have so far attracted far less attention from the Natural Language Generation community than, for example, referring expressions. In an attempt to start redressing the balance, we investigate a recently developed corpus in which quantified expressions play a crucial role; the corpus is the result of a carefully controlled elicitation experiment, in which human participants w…

Cited by 1 publication
(1 citation statement)
References 22 publications
“…(For example, the corpus can help us understand which quantifiers and QE patterns are used most frequently, and how elaborate a description needs to be: for example, when the generator should stop adding further QEs because it has already provided enough information, whether or not the scene has been described completely.) Examples of such a generation algorithm, based on the corpus of the present paper, can be found in Chen et al. (2019).…”
Section: Discussion
confidence: 99%