In this paper, we present several proposals to improve LSA-based tools for evaluating brief summaries (fewer than 50 words) of narrative and expository texts. First, we analyse the quality of six different methods for assessing essays that have been widely employed before (Foltz et al., 2000). Second, we analyse whether new algorithms inspired by Denhière et al. (2007), which try to emulate human behaviour, improve the reliability of LSA with respect to human graders when assessing short summaries, compared with the standard use of LSA on expository text. Finally, we present an assessment method that combines LSA, as a semantic computational linguistic model, with ROUGE-N, as a lexical model, to show how combining different automatic evaluation systems (LSA and ROUGE) can improve the quality of assessments at different academic levels.
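To make the combination concrete, the sketch below shows one plausible way to blend a lexical ROUGE-N recall score with a cosine-based semantic score. The function and parameter names (`combined_score`, `alpha`) are hypothetical, not from the paper, and the "semantic" component here is a plain bag-of-words cosine standing in for LSA: a real LSA system would first project the vectors into an SVD-reduced latent space.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n(candidate, reference, n=2):
    """ROUGE-N recall: fraction of reference n-grams covered by the candidate."""
    ref = Counter(ngrams(reference, n))
    cand = Counter(ngrams(candidate, n))
    if not ref:
        return 0.0
    overlap = sum(min(c, ref[g]) for g, c in cand.items() if g in ref)
    return overlap / sum(ref.values())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dicts)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def combined_score(candidate, reference, alpha=0.5, n=2):
    """Hypothetical linear blend of a semantic and a lexical score.

    The semantic term is a toy stand-in: cosine over raw term counts.
    Real LSA would compare vectors in an SVD-reduced semantic space.
    """
    sem = cosine(Counter(candidate), Counter(reference))
    lex = rouge_n(candidate, reference, n)
    return alpha * sem + (1 - alpha) * lex
```

A weighted linear combination is only one option; the paper's actual combination scheme and weighting are not specified in this abstract.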
Formative assessment and personalised feedback are commonly recognised as key factors both for improving students' performance and for increasing their motivation and engagement (Gibbs and Simpson, 2005). Currently, in large and massive open online courses (MOOCs), technological solutions for giving feedback are often limited to quizzes of various kinds. One of our present challenges is to provide feedback on open-ended questions through semantic technologies in a sustainable way.

To face this challenge, our academic team decided to use a test based on latent semantic analysis (LSA) and chose an automatic assessment tool named G-Rubric. G-Rubric was developed by researchers at the Developmental and Educational Psychology Department of UNED (the Spanish national distance education university). Using G-Rubric, automated formative and iterative feedback was provided to students for different types of open-ended questions (70-800 words). This feedback allowed students to improve their answers and writing skills, thus contributing both to a better grasp of concepts and to the building of knowledge.

In this paper, we present the promising results of our first experiences with UNED business degree students over three academic years (2014-15, 2015-16 and 2016-17). These experiences show to what extent assessment software such as G-Rubric is mature enough to be used with students. It offers them enriched and personalised feedback that proved entirely satisfactory. Furthermore, G-Rubric could help address the problems associated with manual grading, although our ultimate goal is not to replace tutors with semantic tools, but to support tutors who are grading assignments.