2010
DOI: 10.1017/s1351324910000069
Instance-based natural language generation

Abstract: We investigate the use of instance-based ranking methods for surface realization in natural language generation. Our approach to instance-based natural language generation (IBNLG) employs two components: a rule system that 'overgenerates' a number of realization candidates from a meaning representation and an instance-based ranker that scores the candidates according to their similarity to examples taken from a training corpus. We develop an efficient search technique for identifying the optimal candidate base…
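The two-component pipeline described in the abstract can be sketched in miniature as follows. The grammar rules, candidate strings, and word-overlap similarity below are illustrative stand-ins, not the authors' actual IBNLG system or its search technique:

```python
# Minimal sketch of overgenerate-and-rank surface realization.
# The rule system, candidates, and similarity metric are illustrative
# stand-ins, not the actual IBNLG implementation.

def overgenerate(meaning):
    """Hypothetical rule system: emit several realization candidates."""
    subj, verb, obj = meaning
    return [
        f"{subj} {verb} {obj}",
        f"{obj} is {verb} by {subj}".replace("opens", "opened"),
        f"it is {subj} that {verb} {obj}",
    ]

def similarity(candidate, instance):
    """Toy instance-based score: word overlap with a corpus example."""
    c, i = set(candidate.split()), set(instance.split())
    return len(c & i) / len(c | i)

def rank(candidates, instance_base):
    """Pick the candidate whose best match in the instance base is highest."""
    return max(candidates,
               key=lambda c: max(similarity(c, ex) for ex in instance_base))

instance_base = ["the clerk opens the door", "the door was opened"]
best = rank(overgenerate(("the clerk", "opens", "the door")), instance_base)
print(best)  # → "the clerk opens the door"
```

The key design point carried over from the paper is the division of labour: the rule system guarantees grammatical coverage by overgenerating, while corpus similarity, not hand-written preferences, decides among the candidates.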

Cited by 8 publications (11 citation statements). References 28 publications (49 reference statements).
“…Factored language models have been used for surface realization within the OpenCCG framework (White, Rajkumar, and Martin 2007; Espinosa, White, and Mehay 2008). More generally, chart generators for different grammatical formalisms have been trained from syntactic treebanks (Nakanishi, Miyao, and Tsujii 2005; Cahill and van Genabith 2006; White, Rajkumar, and Martin 2007), as well as from semantically annotated treebanks (Varges and Mellish 2001). Because manual syntactic annotation is costly and syntactic parsers do not necessarily perform well at labeling spoken language utterances, the present work focuses on the generation of surface forms directly from semantic concepts.…”
Section: Related Work
confidence: 99%
“…For instance, another interesting convergence point between NLG and LDD is related to hybrid rule-based/machine learning approaches in NLG, which combine rule-based overgeneration of candidate texts with ranking based on machine learning techniques (e.g., the proposal by Varges and Mellish [31], described in Section 2.1). This kind of approach resembles standard LDD algorithms, which generate every possible candidate description (due to imprecision handling) and select the fittest one according to several evaluation criteria.…”
Section: Discussion
confidence: 99%
“…Another interesting approach is given by Varges and Mellish in [31], who propose an overgeneration-and-ranking approach that generates many possible candidate output sentences through a rule-based grammar and then selects the fittest one. Gkatzia et al. present in [32] a methodology that treats content selection as a multi-label classification problem. This approach was applied to the generation of student feedback reports based on data for several factors.…”
Section: Design of an NLG System
confidence: 99%
“…Generation decisions are taken using the instance-based KStar algorithm, which is shown to outperform a majority baseline on all classification decisions. Instance-based approaches to NLG are also discussed by Varges and Mellish (2010), albeit in an overgenerate-and-rank approach where rules overgenerate candidates, which are then ranked by comparison to the instance base.…”
Section: NLG as Classification and Optimisation
confidence: 99%
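The citing work above uses the instance-based KStar learner to make generation decisions by classification. The flavour of that idea can be shown with a much simpler 1-nearest-neighbour sketch; the features, labels, and Hamming distance here are invented for illustration, and KStar itself uses an entropy-based distance, not the one below:

```python
# Illustrative 1-nearest-neighbour classifier for a generation decision
# (e.g., choosing a referring-expression form). Features and labels are
# hypothetical; KStar uses an entropy-based distance, not Hamming.

def hamming(a, b):
    """Count the feature positions where two instances differ."""
    return sum(x != y for x, y in zip(a, b))

def predict(instance_base, query):
    """Return the label of the stored instance closest to the query."""
    features, label = min(instance_base, key=lambda ex: hamming(ex[0], query))
    return label

# (discourse_old, salient, competitor_present) -> expression form
instance_base = [
    ((1, 1, 0), "pronoun"),
    ((1, 0, 1), "definite NP"),
    ((0, 0, 0), "full NP"),
]
print(predict(instance_base, (0, 0, 0)))  # → "full NP"
```

The contrast drawn in the citation statement is then easy to state: here the classifier picks a decision directly, whereas in Varges and Mellish's setup the instance base instead scores candidates that a rule system has already overgenerated.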