2015
DOI: 10.1007/978-3-319-19773-9_10

Predicting Comprehension from Students’ Summaries

Cited by 5 publications (3 citation statements)
References 18 publications
“…From a different perspective, the ReaderBench framework has also been used to assess the textual complexity of texts by providing a wide range of complexity indices covering the surface, lexical, syntactic and semantic levels of discourse (Dascalu, Stavarache, et al., 2015). In future research, we will examine the assessment of learning and comprehension in the context of collaborative discourse using analogous indices adapted for chat conversations (characterized by short contributions).…”
Section: Discussion (mentioning; confidence: 99%)
“…When researchers compared three methods for identifying students' mental models of a topic from their paragraphs (content-based measures derived from LSA, cohesion-based measures from Coh-Metrix, and word-weighting features derived specifically from their corpus of paragraphs), they found that the word-weighting features outperformed the other approaches. To provide another example, researchers have found that neither general indicators of reading strategies nor indicators of textual complexity alone were effective at predicting 3rd–5th graders' comprehension of stories, but a machine learning approach combining some of these features was effective (Dascalu et al., 2015).…”
Section: Automated Scoring Using Machine Learning (mentioning; confidence: 99%)
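The "combination of features" approach mentioned above can be sketched as a classifier trained on complexity indices. The following is a minimal toy illustration, not the authors' actual pipeline: the feature names, index values, and labels are all hypothetical, and a plain logistic regression stands in for whatever learner the cited work used.

```python
import math

# Hypothetical toy data: each row holds three complexity indices for a
# student's summary (surface, lexical, semantic), scaled to [0, 1],
# with a 0/1 label for whether comprehension was judged adequate.
features = [
    [0.2, 0.1, 0.3],
    [0.8, 0.7, 0.9],
    [0.3, 0.2, 0.2],
    [0.9, 0.8, 0.7],
]
labels = [0, 1, 0, 1]

def predict(weights, bias, x):
    """Logistic regression: sigmoid of a weighted feature combination."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def train(features, labels, lr=0.5, epochs=2000):
    """Fit weights by stochastic gradient descent on log loss."""
    weights = [0.0] * len(features[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            err = predict(weights, bias, x) - y
            bias -= lr * err
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights, bias

weights, bias = train(features, labels)
```

The point of the sketch is that no single index decides the prediction; the learned weights combine all of them, which mirrors why a feature combination can succeed where individual indicators fail.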
“…Magliano and Millis (2003) found that LSA variables used to measure semantic similarity between verbal text comprehension prompts and undergraduate students' responses to those prompts predicted scores on a comprehension test. Other tasks for which automated scoring has been attempted include open-ended short-answer questions (Brew & Leacock, 2013), computer-generated cloze tasks whereby a student must identify the missing word from a sentence in a passage (Stenner, Fisher, Stone, & Burdick, 2013), and students' summaries of texts (Dascalu et al, 2015).…”
Section: Previous Research On Automated Scoring In Reading and Writing (mentioning; confidence: 99%)
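The LSA-based scoring described above rests on measuring semantic similarity between a prompt and a student response as the cosine between vector representations. The sketch below uses raw bag-of-words term vectors as a simplified stand-in; true LSA would first project these vectors into a reduced space via singular value decomposition. The example texts are invented for illustration.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine of the angle between bag-of-words term-count vectors.
    A stand-in for LSA similarity: LSA would apply SVD to a
    term-document matrix before comparing, so that semantically
    related but non-identical words could still overlap."""
    va = Counter(text_a.lower().split())
    vb = Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical prompt and responses, for illustration only.
prompt = "why did the character leave the village"
good = "the character left the village to find food"
weak = "dogs are friendly animals"
```

A response sharing vocabulary with the prompt (`good`) scores higher than an off-topic one (`weak`), which is the signal such systems correlate with comprehension-test scores.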