Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2006)
DOI: 10.3115/1220835.1220881

Aggregation via set partitioning for natural language generation

Abstract: The role of aggregation in natural language generation is to combine two or more linguistic structures into a single sentence. The task is crucial for generating concise and readable texts. We present an efficient algorithm for automatically learning aggregation rules from a text and its related database. The algorithm treats aggregation as a set partitioning problem and uses a global inference procedure to find an optimal solution. Our experiments show that this approach yields substantial improvements over a…
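The abstract's framing of aggregation as set partitioning with global inference can be illustrated with a minimal brute-force sketch. The entry names (`e1`–`e3`) and the pairwise `sim` scores below are hypothetical stand-ins for the learned "should these database entries be aggregated?" model; the paper itself uses integer linear programming rather than exhaustive enumeration, which is only feasible here because the example is tiny.

```python
from itertools import combinations

def partitions(items):
    """Enumerate every partition of a list (Bell-number many; small inputs only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        yield [[first]] + part                     # `first` starts its own block
        for i in range(len(part)):                 # or `first` joins an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

# Hypothetical pairwise aggregation scores (positive = entries read well together).
sim = {
    frozenset({"e1", "e2"}): 2.0,
    frozenset({"e1", "e3"}): -1.0,
    frozenset({"e2", "e3"}): 0.5,
}

def score(partition):
    """Global objective: sum the pairwise scores inside each block of the partition."""
    return sum(sim[frozenset({a, b})]
               for block in partition
               for a, b in combinations(block, 2))

# Global inference: pick the partition maximizing the objective, so each block
# becomes one aggregated sentence.
best = max(partitions(["e1", "e2", "e3"]), key=score)
```

The global search matters: a greedy pairwise merge could be tempted to fold `e3` into the `e1`/`e2` block for the +0.5 link, but the partition-level objective sees that the −1.0 penalty outweighs it.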

Cited by 47 publications (43 citation statements) | References 11 publications
“…In most cases, these works combine hard constraints with learning algorithms. Beginning with Roth and Yih (2004, 2007), a series of works proposed and studied models that incorporate learned models with declarative constraints, with successful applications in Natural Language Processing and Information Extraction, including semantic role labeling (Roth and Yih 2005; Punyakanok et al. 2005a, 2008), summarization (Clarke and Lapata 2006; Barzilay and Lapata 2006), generation (Marciniak and Strube 2005), and co-reference resolution (Denis and Baldridge 2007). Most of these works use only hard constraints with the factored approach and with supervised classifiers.…”
Section: Related Work
confidence: 99%
“…In the earlier related works that made use of constraints, the constraints were assumed to be Boolean functions; in most cases, a high-level (first-order logic) description of the constraints was compiled into a set of linear inequalities, and exact inference was done using an integer linear programming (ILP) formulation (Roth and Yih 2004; Punyakanok et al. 2005a; Barzilay and Lapata 2006; Clarke and Lapata 2006). Although ILP can be intractable for very large-scale problems, it has been shown to be quite successful in practice when applied to many practical NLP tasks (Roth and Yih 2005, 2007).…”
Section: Integer Linear Programming
confidence: 99%
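The compilation step this citation describes — Boolean constraints rewritten as linear inequalities over 0/1 variables — can be sketched as follows. The classifier `scores` and the two constraints are hypothetical examples of the standard encodings (an implication x0 → x1 becomes x0 ≤ x1; "at most one of x1, x2" becomes x1 + x2 ≤ 1); a real system would hand the inequalities to an ILP solver rather than enumerate assignments.

```python
from itertools import product

# Hypothetical local classifier scores for three binary decisions.
scores = [0.9, -0.2, 0.4]

def feasible(x):
    # Boolean constraint "x0 implies x1", compiled to the linear
    # inequality x0 <= x1.
    if not (x[0] <= x[1]):
        return False
    # Boolean constraint "at most one of x1, x2", compiled to x1 + x2 <= 1.
    if not (x[1] + x[2] <= 1):
        return False
    return True

# Exact inference: maximize the summed scores over all feasible 0/1 assignments
# (brute force here, standing in for an ILP solver).
best = max((x for x in product((0, 1), repeat=3) if feasible(x)),
           key=lambda x: sum(s * xi for s, xi in zip(scores, x)))
```

Note how the constraints change the answer: unconstrained, the best assignment would set x0 and x2 but not x1, yet that violates both inequalities, so exact inference settles on a feasible assignment instead.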
“…In particular, it has been shown to be an effective technique for combining the output of many local decision makers into a coherent, exact, global inference. [1] describes an automatic semantic aggregator that uses constraints to control the number of aggregated sentences and their lengths.…”
Section: Introduction
confidence: 99%
“…The main difference lies in the way the text is fed: Barzilay and Lapata [8] use clustered text, while Walker et al. [83] use raw text.…”
Section: Trainable Approaches for Aggregation
confidence: 99%
“…Barzilay and Lapata [8] formulate the task as finding a cluster of phrases from the given set that maximizes a defined utility function. This approach shares many properties with the one noted earlier.…”
Section: Trainable Approaches for Aggregation
confidence: 99%