An adequate level of linguistic complexity in learning materials is believed to be of crucial importance for learning. The implication for school textbooks is that reading complexity should differ systematically between grade levels and between higher and lower tracks in line with what can be called the systematic complexification assumption. However, research has yet to test this hypothesis with a real-world sample of textbooks. In the present study, we used automatic measures from computational linguistic research to analyze 2,928 texts from geography textbooks from four publishers in Germany in terms of their reading demands. We measured a wide range of lexical, syntactic, morphological, and cohesion-related features and developed text classification models for predicting the grade level (Grades 5 to 10) and school track (academic vs. vocational) of the texts using these features. We also tested ten linguistic features that are considered to be particularly important for a reader’s understanding. The results provided only partial support for systematic complexification. The text classification models showed accuracy rates that were clearly above chance but with considerable room for improvement. Furthermore, there were significant differences across grade levels and school tracks for some of the ten linguistic features. Finally, there were marked differences among publishers. The discussion outlines key components for a systematic research program on the causes and consequences of the lack of systematic complexification in reading materials.
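The classification approach described above can be illustrated with a minimal sketch. The feature set (average sentence length, type-token ratio, average word length) and the nearest-centroid classifier below are simplified stand-ins for the wide range of lexical, syntactic, morphological, and cohesion features and the text classification models used in the study; the centroid values are invented for illustration.

```python
import re

def extract_features(text):
    """Compute simple surface proxies for reading demands."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-zÄÖÜäöüß]+", text)
    avg_sentence_len = len(words) / max(len(sentences), 1)
    type_token_ratio = len({w.lower() for w in words}) / max(len(words), 1)
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    return [avg_sentence_len, type_token_ratio, avg_word_len]

def nearest_centroid(features, centroids):
    """Assign the grade whose feature centroid is closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda grade: dist(features, centroids[grade]))

# Hypothetical centroids, as if learned from grade-labeled textbook texts:
# lower grades pattern with shorter sentences and shorter words.
centroids = {
    5:  [8.0, 0.70, 4.5],
    10: [16.0, 0.55, 6.0],
}

text = "Der Fluss fließt. Er ist lang."
predicted_grade = nearest_centroid(extract_features(text), centroids)
```

A real pipeline would train on thousands of labeled texts and use far richer features, but the prediction step has exactly this shape: map a text to a feature vector, then assign the closest grade-level profile.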
Despite the promise of research conducted at the intersection of computer‐assisted language learning (CALL), natural language processing, and second language acquisition, few studies have explored the potential benefits of using intelligent CALL systems to deepen our understanding of the process and products of second language (L2) learning. The strategic use of technology offers researchers novel methodological opportunities to examine how incremental changes in L2 development occur during treatment as well as how the longitudinal impacts of experimental interventions on L2 learning outcomes occur on a case‐by‐case basis. Drawing on the pilot results from a project examining the effects of automatic input enhancement on L2 learners’ development, this article explores how the use of technology offers additional methodological and analytical choices for the investigation of the process and outcomes of L2 development, illustrating the opportunities to study what learners do during visually enhanced instructional activities.
How can second language teachers retrieve texts that are rich in terms of the grammatical constructions to be taught, but also address the content of interest to the learners? We developed an Information Retrieval system that identifies the 87 grammatical constructions spelled out in the official English language curriculum of schools in Baden-Württemberg (Germany) and reranks the search results based on the selected (de)prioritization of grammatical forms. In combination with a visualization of the characteristics of the search results, the approach effectively supports teachers in prioritizing those texts that provide the targeted forms. The approach facilitates systematic input enrichment for language learners as a complement to the established notion of input enhancement: while input enrichment aims at richly representing the selected forms and categories in a text, input enhancement targets their presentation to make them more salient and support noticing.
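The reranking idea can be sketched in a few lines. The construction detectors below are toy regular expressions standing in for the system's identification of the 87 curriculum constructions, and the scoring function (prioritized-construction matches per 100 words) is one plausible density measure, not necessarily the one used.

```python
import re

# Hypothetical detectors for two constructions a teacher might prioritize.
DETECTORS = {
    "present_perfect": re.compile(r"\b(?:has|have)\s+\w+ed\b", re.I),
    "comparative":     re.compile(r"\b\w+er\s+than\b", re.I),
}

def construction_density(text, prioritized):
    """Matches of prioritized constructions per 100 words."""
    words = len(text.split())
    hits = sum(len(DETECTORS[c].findall(text)) for c in prioritized)
    return 100.0 * hits / max(words, 1)

def rerank(results, prioritized):
    """Reorder retrieved texts so construction-rich ones come first."""
    return sorted(results,
                  key=lambda t: construction_density(t, prioritized),
                  reverse=True)

docs = [
    "The city is big.",
    "She has visited Rome and it is bigger than Paris.",
]
ranked = rerank(docs, ["present_perfect", "comparative"])
```

Deprioritization fits the same scheme by subtracting, rather than adding, the densities of the deselected forms from each text's score.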
In Foreign Language Teaching and Learning (FLTL), questions are systematically used to assess the learner's understanding of a text. Computational linguistic (CL) approaches have been developed to generate such questions automatically given a text (e.g., Heilman, 2011). In this paper, we want to broaden the perspective on the different functions questions can play in FLTL and discuss how automatic question generation can support the different uses. Complementing the focus on meaning and comprehension, we want to highlight the fact that questions can also be used to make learners notice form aspects of the linguistic system and their interpretation. Automatically generating questions that target linguistic forms and grammatical categories in a text in essence supports incidental focus-on-form (Loewen, 2005) in a meaning-focused reading task. We discuss two types of questions serving this purpose and how they can be generated automatically, and we report on a crowdsourcing evaluation comparing automatically generated questions to manually written ones targeting particle verbs, a challenging linguistic form for learners of English.
How can state-of-the-art computational linguistic technology reduce the workload and increase the efficiency of language teachers? To address this question, we combine insights from research in second language acquisition and computational linguistics to automatically generate text-based questions for a given text. The questions are designed to draw the learner’s attention to target linguistic forms – phrasal verbs, in this particular case – by requiring learners to use the forms or their paraphrases in the answer. Such questions help learners create form-meaning connections and are well suited for both practice and testing. We discuss the generation of a novel type of question combining a wh- question with a gapped sentence, and report the results of two crowdsourcing evaluation studies investigating how well automatically generated questions compare to those written by a language teacher. The first study compares our system output to gold-standard human-written questions via crowdsourced rating. An equivalence test shows that automatically generated questions are comparable to human-written ones. The second crowdsourcing study investigates two types of questions (wh- questions with and without a gapped sentence), their perceived quality, and the responses they elicit. Finally, we discuss the challenges and limitations of creating and evaluating question-generation systems for language learners.
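The novel question type, a wh- question paired with a gapped sentence, can be sketched as a template over a subject–phrasal-verb–object triple. In the actual system these elements would be extracted from the source text with parsing and paraphrase resources; here they are supplied by hand, and the templates are illustrative rather than the system's own.

```python
def generate_gapped_question(subject, phrasal_verb, obj):
    """Build a wh- question plus a gapped sentence targeting the verb.

    The gap forces the learner to produce the target phrasal verb
    (or a paraphrase), creating a form-meaning connection.
    """
    question = f"What did {subject} do with {obj}?"
    gap = "_" * len(phrasal_verb)  # blank sized to the target form
    gapped = f"{subject.capitalize()} {gap} {obj}."
    return question, gapped

q, g = generate_gapped_question("the student", "looked up", "the word")
# q: "What did the student do with the word?"
# g: "The student _________ the word."
```

The gapped sentence constrains the answer far more than the wh- question alone, which is what makes the combined type suitable for targeting a specific form in practice and testing.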