In the present study, we tested a computer-based procedure for assessing very concise summaries (50 words long) of two types of text (narrative and expository) using latent semantic analysis (LSA) in comparison with the judgments of four human experts. LSA was used to estimate semantic similarity using six different methods: four holistic (summary-text, summary-summaries, summary-expert summaries, and pregraded-ungraded summary) and two componential (summary-sentence text and summary-main sentence text). A total of 390 Spanish middle and high school students (14-16 years old) and six experts read a narrative or expository text and later summarized it. The results support the viability of developing a computerized assessment tool that combines human judgments and LSA, although the correlation between human judgments and LSA was higher for the narrative text than for the expository text, and LSA correlated more strongly with human content ratings than with human coherence ratings. Finally, the holistic methods were found to be more reliable than the componential methods analyzed in this study.
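The LSA similarity scoring described above can be sketched in a few lines: build a term-document matrix from a training corpus, take a truncated SVD to obtain the latent semantic space, fold new texts into that space, and compare them with cosine similarity. This is a minimal illustration over a toy corpus (the documents, dimensionality, and function names are hypothetical), not the study's actual pipeline:

```python
import numpy as np

def term_doc_matrix(docs):
    """Raw term-frequency matrix (terms x documents) plus the vocabulary index."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.split():
            M[idx[w], j] += 1
    return M, idx

# Toy training corpus standing in for the background texts (hypothetical data).
corpus = [
    "the hero crossed the river",
    "the river flooded the village",
    "volcanoes erupt when magma rises",
    "magma rises through the crust",
]
M, idx = term_doc_matrix(corpus)

# Truncated SVD yields the latent semantic space; k is the number of dimensions kept.
U, S, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
Uk, Sk = U[:, :k], S[:k]

def project(text):
    """Fold a new text into the k-dimensional latent space: S_k^-1 U_k^T d."""
    v = np.zeros(len(idx))
    for w in text.split():
        if w in idx:
            v[idx[w]] += 1
    return (v @ Uk) / Sk

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Holistic summary-text method: compare a student summary against the source text.
summary = "magma rises and volcanoes erupt"
source = "volcanoes erupt when magma rises"
print(round(cosine(project(summary), project(source)), 3))
```

The holistic methods in the study differ mainly in what the summary is compared against (the full text, other summaries, expert summaries, or pregraded summaries); all reduce to a cosine in the same latent space.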
Background
The present study analysed how relevance instructions affect eye movement patterns and performance on a summary task involving six expository texts.
Methods
Forty‐one undergraduate students participated in the experiment; half of them were instructed to make an oral summary of the main ideas focusing on the ‘why’ question that appeared at the end of the first paragraph (specific relevance instruction), while the other half were instructed to make an oral summary of the main ideas of the text (general relevance instruction).
Results
Eye movement patterns revealed that specific instructions promoted more and longer fixations, and more regressions, on relevant information than general instructions did. Summaries produced under specific instructions also contained a higher percentage of words related to relevant information.
Conclusions
These findings suggest that relevance instructions influence how readers enact strategies to meet their reading goals and how these strategies are reflected in memory.
The purpose of this study was to evaluate the effectiveness of an instructional program in Spanish, LEE comprensivamente, designed to improve reading comprehension. The program's framework targeted text-level processes, in particular inference making, metacognitive control, and knowledge of text structure. In addition, a word-level process, vocabulary, was also trained. The program, which consisted of sixteen 80-minute sessions over 8 weeks, was tested on 127 children aged 8-10 from different schools in Buenos Aires. A parallel group in each class served as a passive control. Assessment measures included vocabulary, monitoring, inference making, and general reading comprehension, administered before and after the intervention. Only the intervention group showed a statistically significant improvement. These findings suggest that interventions focused on vocabulary, inference making, monitoring, and knowledge of text structure improve reading comprehension in a school setting.
In this paper, we present several proposals to improve LSA tools for evaluating brief summaries (less than 50 words) of narrative and expository texts. First, we analyse the quality of six different essay-assessment methods that have been widely employed before (Foltz et al., 2000). Second, we analyse whether new algorithms inspired by Denhière et al. (2007), which attempt to emulate human grading behaviour, improve the reliability of LSA with respect to human graders when assessing short summaries of expository text, compared with standard LSA use. Finally, we present an assessment method that combines LSA, as a semantic computational linguistic model, with ROUGE-N, as a lexical model, to show how combining different automatic evaluation systems (LSA and ROUGE) can improve the quality of assessments at different academic levels.
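The lexical side of the combined method, ROUGE-N, is simply n-gram recall: the fraction of the reference's n-grams that also appear in the candidate summary. A minimal sketch (the example sentences are hypothetical, and this shows only the recall variant, not the paper's exact scoring):

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=2):
    """ROUGE-N recall: clipped overlapping n-grams / total reference n-grams."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not ref:
        return 0.0
    overlap = sum(min(cand[g], count) for g, count in ref.items())
    return overlap / sum(ref.values())

reference = "the program improved reading comprehension in middle school students"
candidate = "reading comprehension improved in middle school students"
print(rouge_n(candidate, reference, n=2))  # → 0.5
```

A combined score along the lines the paper describes could then weight the two signals, e.g. `alpha * lsa_cosine + (1 - alpha) * rouge_n(...)` for some tuned `alpha`; the weighting scheme here is an assumption, not the paper's published formula.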