“…Besides the Levumi reading comprehension test, participants could complete additional CBM assessments with a complete text-based item pool, as well as other established reading comprehension screenings. This would establish whether the Levumi reading comprehension test relies more on code-related skills (e.g., reading fluency) than on language-related skills (e.g., reading comprehension), as suggested by Muijselaar et al. (2017). This would indicate which reading problems our test is effective at identifying.…”
Section: Limitations and Future Work
confidence: 87%
“…CBM-Maze was designed to monitor the growth of intermediate and secondary students' reading comprehension. More recent studies showed that CBM-Maze measures early language skills, such as sentence-level comprehension and code-related skills, rather than higher language skills, such as inference-making, text comprehension, and knowledge about text structures (Wayman et al., 2007; Graney et al., 2010; Muijselaar et al., 2017). Because CBM-Maze assesses earlier reading skills, it may be adapted for younger students, including low-achieving students.…”
Reading comprehension at the sentence level is a core component of students' comprehension development, yet there is a lack of sentence-level comprehension assessments that are grounded in the theory of reading comprehension. In this article, a new web-based sentence-comprehension assessment for German primary school students is developed and evaluated within a curriculum-based measurement (CBM) framework. The test focuses on sentence-level reading comprehension as an intermediary between word and text comprehension. Its construction builds on the theory of reading comprehension using CBM-Maze techniques; the format is consistent across all tasks, and the items contain different syntactic and semantic structures. This paper presents the test development, a description of item performance, an analysis of the test's factor structure, and tests of measurement invariance and group comparisons (i.e., across gender, immigration background, two measurement points, and the presence of special educational needs; SEN). Third-grade students (n = 761) with and without SEN completed two CBM tests over three weeks. Results reveal that the items had good technical adequacy, the constructed test is unidimensional, and it is valid for students both with and without SEN. Similarly, it is valid for both sexes, and results are valid across both measurement points. We discuss our method for creating a unidimensional test based on multiple item difficulties and make recommendations for future test construction.
“…It should be noted that there are multiple ways to assess reading comprehension, with variable influence from word reading ability (see Keenan, Betjemann, & Olson, 2008). The Maze subtest was our only option for measuring comprehension within aimsweb; however, it has been shown that this type of assessment may be influenced by word reading more than other reading comprehension assessments that use multiple-choice or open-response questions (Keenan et al., 2008; Muijselaar, Kendeou, de Jong, & van den Broek, 2017).…”
Section: Reading Comprehension
confidence: 99%
“…However, Shankweiler et al. (1999) also found a similarly large proportion of readers with mixed deficits (44%) using a composite score for multiple word reading and reading comprehension assessments. One possible explanation for these cross-study differences could be the variable influence of word reading abilities on different reading comprehension tasks (Keenan et al., 2008; Muijselaar et al., 2017). Given that Maze-type tasks are known to depend more on word reading skills compared to other reading comprehension tasks, it is possible that we identified an inflated number of students with mixed deficits and a deflated number of poor comprehenders in our sample due to the measure we used.…”
Section: Identifying Reader Profiles Using a Single Progress Monitoring
“…A maze task, in which readers must circle the correct word out of three options for every seventh word in a text (Chung et al., 2018), was used to assess reading skill. This type of measure is thought to assess a variety of constructs important to reading, including word recognition, fluency, and comprehension (Muijselaar et al., 2017; Shin and McMaster, 2019). The 3-min measure used in this study has been found to be a reliable and valid measure of reading skill for undergraduate students (Hebert, 2016).…”
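The maze format described in these excerpts (every seventh word replaced by a three-option forced choice) can be sketched programmatically. The following is a minimal illustration only, not any of the cited studies' actual item-construction procedure; the `distractor_pool` argument and the simple random distractor selection are assumptions for the sketch, since real CBM-Maze construction controls distractors for syntactic and semantic plausibility.

```python
import random

def build_maze_items(text, distractor_pool, nth=7, n_options=3, seed=0):
    """Turn a passage into maze items: every nth word becomes a
    multiple-choice item containing the original word plus distractors."""
    rng = random.Random(seed)
    words = text.split()
    items = []
    for i, word in enumerate(words, start=1):
        if i % nth == 0:
            # Hypothetical distractor choice: random words from a pool,
            # excluding the correct answer itself.
            distractors = rng.sample(
                [d for d in distractor_pool if d != word], n_options - 1
            )
            options = [word] + distractors
            rng.shuffle(options)
            items.append({"position": i, "answer": word, "options": options})
    return items

# Example: a 14-word passage yields items at word positions 7 and 14.
passage = ("the quick brown fox jumps over the lazy dog "
           "runs far away today now")
items = build_maze_items(passage, ["cat", "tree", "house", "river"])
```

Scoring such a task is then a matter of counting correct selections within the time limit (e.g., 3 minutes in the study cited above).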