Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2014
DOI: 10.3115/v1/p14-2041

Content Importance Models for Scoring Writing From Sources

Abstract: Selection of information from external sources is an important skill assessed in educational measurement. We address an integrative summarization task used in an assessment of English proficiency for nonnative speakers applying to higher education institutions in the USA. We evaluate a variety of content importance models that help predict which parts of the source material should be selected by the test-taker in order to succeed on this task.
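
The abstract describes models that predict which parts of the source material a successful test-taker should select. As a rough illustration of what such a content importance model could look like, the sketch below scores each source sentence by how much of its content vocabulary is reused in a set of high-scoring responses; the function names, stopword list, and scoring rule are all illustrative assumptions, not the paper's method.

```python
# Toy content importance model: weight each source sentence by how often
# its content words are reused in high-scoring responses. All names and
# choices here are illustrative, not taken from the paper.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "that"}

def content_words(text):
    """Lowercase, split on whitespace, strip punctuation, drop stopwords."""
    words = (w.strip(".,;:!?").lower() for w in text.split())
    return [w for w in words if w and w not in STOPWORDS]

def importance_scores(source_sentences, high_scoring_responses):
    """Fraction of each source sentence's content words that appear
    anywhere in the high-scoring responses."""
    response_vocab = Counter()
    for resp in high_scoring_responses:
        response_vocab.update(content_words(resp))
    scores = []
    for sent in source_sentences:
        words = content_words(sent)
        covered = sum(1 for w in words if response_vocab[w] > 0)
        scores.append(covered / len(words) if words else 0.0)
    return scores
```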

Cited by 19 publications (14 citation statements). References 11 publications.
“…One general approach has been to compare the content in the test response with elements from the stimulus materials presented to the test taker in the source‐based task, such as a listening passage or an article. This approach has resulted in some features that have significant correlations with human scores but that do not perform as well as the features calculated using models trained on human‐scored responses; Evanini, Xie, and Zechner () have presented results using this type of prompt‐based feature for spoken responses, and Beigman Klebanov, Madnani, Burstein, and Somasundaran () have presented results using prompt‐based features for essays.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
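
To make the prompt-based approach in the statement above concrete, here is a minimal sketch of one such feature: plain lexical overlap (cosine similarity over term frequencies) between a test response and its stimulus passage. It is a generic stand-in, assuming bag-of-words vectors, and not the specific features of Evanini, Xie, and Zechner or Beigman Klebanov et al.

```python
# Generic prompt-based overlap feature: cosine similarity between the
# term-frequency vectors of a response and the stimulus passage. This is
# a stand-in for, not a reproduction of, the cited papers' features.
import math
from collections import Counter

def cosine_overlap(response: str, stimulus: str) -> float:
    """Cosine similarity of bag-of-words term-frequency vectors."""
    r = Counter(response.lower().split())
    s = Counter(stimulus.lower().split())
    dot = sum(count * s[word] for word, count in r.items())
    norm = (math.sqrt(sum(v * v for v in r.values()))
            * math.sqrt(sum(v * v for v in s.values())))
    return dot / norm if norm else 0.0
```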
“…We use Galley and McKeown (2003) lexical chaining and extract the first set of features (LEX1) introduced in Somasundaran et al (2014). We do not implement the second set because we do not have the annotation or the tagger to tag discourse cues.…”
Section: Methods (citation type: mentioning; confidence: 99%)
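
For context on the lexical chaining features mentioned above, the toy sketch below derives chain statistics of the kind LEX1-style features might use. Galley and McKeown (2003) build chains over WordNet relations; this simplification chains only exact token repetitions, and the feature names are assumptions of this sketch.

```python
# Toy lexical chaining: Galley and McKeown (2003) chain words via WordNet
# relations; this simplification chains exact token repetitions only.
# Feature names below are assumptions of this sketch, not LEX1's own.
from collections import defaultdict

def repetition_chains(tokens):
    """Map each token (lowercased) seen at least twice to its positions."""
    positions = defaultdict(list)
    for i, tok in enumerate(tokens):
        positions[tok.lower()].append(i)
    return {w: idx for w, idx in positions.items() if len(idx) > 1}

def chain_features(tokens):
    """Simple chain statistics one might feed a scoring model."""
    chains = repetition_chains(tokens)
    sizes = [len(v) for v in chains.values()]
    return {
        "num_chains": len(chains),
        "avg_chain_size": sum(sizes) / len(sizes) if sizes else 0.0,
        "max_chain_span": max((v[-1] - v[0] for v in chains.values()),
                              default=0),
    }
```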
“…Lexical chains (Somasundaran et al, 2014) and entity grids (Burstein et al, 2010) have been used to measure lexical cohesion. In other words, these models measure the continuity of lexical meaning.…”
Section: Topic-grid and Topic Chains (citation type: mentioning; confidence: 99%)
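
The entity-grid idea mentioned in this statement can be sketched briefly: record, for each entity, whether it appears in each sentence, then count transitions across adjacent sentences, where an X->X transition indicates lexical continuity. This simplifies Barzilay-and-Lapata-style grids (as used by Burstein et al., 2010) by ignoring syntactic roles; the entity list is assumed to be given.

```python
# Simplified entity grid: rows are entities, columns are sentences, with
# "X" for a mention and "-" for absence. Syntactic roles (subject/object)
# from the original entity-grid model are deliberately ignored here.
from collections import Counter

def entity_grid(sentences, entities):
    """Build the presence/absence grid via naive substring matching."""
    return {e: ["X" if e.lower() in s.lower() else "-" for s in sentences]
            for e in entities}

def transition_counts(grid):
    """Count adjacent-sentence transitions; 'XX' signals continuity."""
    counts = Counter()
    for row in grid.values():
        for a, b in zip(row, row[1:]):
            counts[a + b] += 1
    return counts
```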