Proceedings of the EACL 2003 Workshop on Evaluation Initiatives in Natural Language Processing: Are Evaluation Methods, Metrics and Resources Reusable? (2003)
DOI: 10.3115/1641396.1641398
Reuse and challenges in evaluating language generation systems

Abstract: Although there is an increasing shift towards evaluating Natural Language Generation (NLG) systems, many NLG-specific open issues still hinder effective comparative and quantitative evaluation in this field. The paper first describes a task-based, i.e., black-box, evaluation of a hypertext NLG system. It then examines the problem of glass-box, i.e., module-specific, evaluation in language generation, with a focus on evaluating machine learning methods for text planning.
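The abstract's central distinction, task-based black-box evaluation of a whole system versus glass-box evaluation of an individual module such as the text planner, can be made concrete with a small evaluation harness. The sketch below is purely illustrative and assumes a hypothetical pipeline: NLGSystem, black_box_eval, glass_box_eval, and all their parameters are invented for this example and are not taken from the paper.

```python
# Minimal sketch of black-box vs. glass-box evaluation for an NLG system.
# All names here are hypothetical; they do not come from the paper.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class NLGSystem:
    """A toy two-stage pipeline: text planner followed by surface realiser."""
    plan: Callable[[dict], List[str]]     # content selection and ordering
    realise: Callable[[List[str]], str]   # turns a plan into output text

    def generate(self, inputs: dict) -> str:
        return self.realise(self.plan(inputs))


def black_box_eval(system: NLGSystem,
                   tasks: List[dict],
                   task_success: Callable[[str, dict], bool]) -> float:
    """Task-based (black-box) evaluation: judge only the end-to-end output,
    e.g. whether users could complete a task with the generated hypertext.
    The internals of the system are never inspected."""
    successes = sum(task_success(system.generate(t["inputs"]), t)
                    for t in tasks)
    return successes / len(tasks)


def glass_box_eval(planner: Callable[[dict], List[str]],
                   gold_plans: List[dict]) -> float:
    """Module-specific (glass-box) evaluation: score the text planner in
    isolation against gold-standard plans (here, exact-match accuracy)."""
    hits = sum(planner(ex["inputs"]) == ex["plan"] for ex in gold_plans)
    return hits / len(gold_plans)
```

The design point is that black_box_eval treats the system as opaque and measures task success, while glass_box_eval scores one component against gold-standard intermediate output, which is what makes comparative evaluation of, say, learned text planners possible.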
