2007
DOI: 10.3758/bf03193148
Assessing the format of the presentation of text in developing a Reading Strategy Assessment Tool (R-SAT)

Cited by 23 publications (31 citation statements). References 15 publications.
“…Each answer to the target sentences was automatically scored by identifying the number of content words in the answer that were also in the text or in an ideal answer (Gilliam et al., 2007; Magliano et al., 2011). Content words included nouns, adverbs, adjectives, and verbs (semantically depleted verbs such as is, are, and were were omitted).…”
Section: Participants
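The content-word overlap rule described in the statement above lends itself to a short, illustrative implementation. The Python sketch below is a reconstruction under stated assumptions, not the RSAT authors' code: it assumes NLTK's tokenizer and part-of-speech tagger, and the tag prefixes and the list of semantically depleted verbs are placeholders.

import nltk

# One-time downloads may be required:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

CONTENT_TAG_PREFIXES = ("NN", "VB", "JJ", "RB")   # nouns, verbs, adjectives, adverbs
DEPLETED_VERBS = {"is", "are", "was", "were", "be", "been", "being"}  # assumed list

def content_words(text):
    """Lowercased content words, with semantically depleted verbs dropped."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text.lower()))
    return {word for word, tag in tagged
            if tag.startswith(CONTENT_TAG_PREFIXES) and word not in DEPLETED_VERBS}

def overlap_score(answer, text_so_far, ideal_answer):
    """Count answer content words that also appear in the text or in an ideal answer."""
    sources = content_words(text_so_far) | content_words(ideal_answer)
    return len(content_words(answer) & sources)

# Example call with invented strings:
score = overlap_score(
    "She packed rain gear because the clouds looked threatening",
    "Dark clouds were gathering over the ridge before the hike.",
    "Maria brought rain gear because a storm seemed likely.")

The score here is a raw count of overlapping content words; the actual RSAT scoring procedure and any thresholds it applies are not reproduced.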
“…To assess the change in self-explanation usage and the alignment between self-explanation processes and comprehension, we used an automated reading strategy assessment tool, the Reading Strategy Assessment Tool (RSAT; Gilliam, Magliano, Millis, Levinstein, & Boonthum, 2007; Magliano et al., 2011). RSAT was developed to provide a computer-based approach for collecting and analyzing verbal protocols produced while reading texts.…”
Section: Changing How Students Process and Comprehend Texts With Comp…
“…Although not a typical use, LSA "in the small" has been utilized for evaluating students' answers against predefined system answers in several ITSs, including AutoTutor [28], iSTART [64], and R-SAT [22]. We also employed LSA in our system for this purpose, making use of the Gensim library [76] implementation of LSA with stopwords removed and a similarity threshold ranging between 0.5 and 0.85.…”
Section: Measuring Semantic Similarity Between Arguments
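For readers unfamiliar with Gensim's LSI implementation, the following sketch shows one way such "LSA in the small" matching might look. The sample ideal answers, the number of topics, and the single 0.65 threshold are illustrative assumptions, not the cited system's configuration.

from gensim import corpora, models, similarities
from gensim.parsing.preprocessing import remove_stopwords

# Hypothetical ideal answers supplied by the system designer.
ideal_answers = [
    "the character left because she was angry with her brother",
    "the experiment failed because the samples were contaminated",
]

# Remove stopwords, tokenize, and build a bag-of-words corpus.
texts = [remove_stopwords(answer.lower()).split() for answer in ideal_answers]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# Train a small LSI (LSA) space and index the ideal answers in it.
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
index = similarities.MatrixSimilarity(lsi[corpus], num_features=lsi.num_topics)

def matches_ideal(student_answer, threshold=0.65):
    """True if the student answer is close enough to any ideal answer."""
    bow = dictionary.doc2bow(remove_stopwords(student_answer.lower()).split())
    return float(max(index[lsi[bow]])) >= threshold

print(matches_ideal("she stormed out because her brother had made her angry"))

In practice a threshold anywhere in the 0.5 to 0.85 range mentioned above could be passed in place of the default used here.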
“…RSAT is a computer-administered test that is designed to assess a student's level of comprehension and the processes that support it while the student is reading (Gilliam et al., 2007; Magliano et al., 2011). Students read texts one sentence at a time and are prompted to answer indirect questions ("What are you thinking now?") that require responses that are akin to thinking aloud (Trabasso & Magliano, 1996).…”
Section: Two Approaches For Developing Assessment Targets
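As a concrete illustration of that sentence-by-sentence procedure, a minimal administration loop might look like the sketch below; the passage and the handling of the indirect prompt are invented for illustration and do not reproduce the actual RSAT software.

# Hypothetical passage; RSAT presents real texts one sentence at a time.
passage = [
    "Maria checked the weather before the hike.",
    "Dark clouds were gathering over the ridge.",
    "She decided to pack the rain gear anyway.",
]

protocol = []
for sentence in passage:
    print(sentence)
    # Indirect question prompting a think-aloud-style response.
    response = input("What are you thinking now? ")
    protocol.append((sentence, response))

# 'protocol' now pairs each sentence with the reader's open-ended response,
# ready to be scored (for example, with the content-word overlap sketch above).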
“…We define student-constructed responses as those that require a student to produce an answer in natural language that may range from a couple of sentences to several paragraphs. These advances have been in the context of computer-based assessments of explanations and think-aloud protocols during reading comprehension (Gilliam, Magliano, Millis, Levinstein, & Boonthum, 2007; Magliano, Millis, the RSAT Development Team, Levinstein, & Boonthum, 2011), the grading of essays and text summaries (Attali & Burstein, 2006; Burstein, Marcu, & Knight, 2003; Franzke, Kintsch, Caccamise, Johnson, & Dooley, 2005; Landauer, Laham, & Foltz, 2003), the grading of short-answer questions (Leacock & Chodorow, 2003), and intelligent tutoring systems and trainers that require students to produce constructed responses during interactive conversations (Graesser, Jeon, & Dufty, 2008; Litman et al., 2006; McNamara, Levinstein, & Boonthum, 2004; VanLehn et al., 2007). These can take the form of directed responses to specific questions or less directed think-aloud and self-explanation responses.…”