This study investigates caption‐reading behavior by foreign language (L2) learners and, through eye‐tracking methodology, explores the extent to which the relationship between the native and target language affects that behavior. Second‐year (4th semester) English‐speaking learners of Arabic, Chinese, Russian, and Spanish watched 2 videos differing in content familiarity, each dubbed and captioned in the target language. Results indicated that time spent on captions differed significantly by language: Arabic learners spent more time on captions than learners of Spanish and Russian. A significant interaction between language and content familiarity occurred: Chinese learners spent less time on captions in the unfamiliar content video than the familiar, while others spent comparable times on each. Based on dual‐processing and cognitive load theories, we posit that the Chinese learners experienced a split‐attention effect when verbal processing was difficult and that, overall, captioning benefits during the 4th semester of language learning are constrained by L2 differences, including differences in script, vocabulary knowledge, concomitant L2 proficiency, and instructional methods. Results are triangulated with qualitative findings from interviews.
The Fall 2007 and Spring 2008 pilot tests for the CBAL™ Writing assessment included experimental keystroke logging capabilities. This report documents the approaches used to capture the keystroke logs and the algorithms used to process the outputs. It also includes some preliminary findings based on the pilot data. In particular, it notes that the distributions of most pause lengths are consistent with data generated from a mixture of lognormal distributions. This corresponds to a cognitive model in which some pauses are merely part of the transcription (i.e., typing) process and some are part of more involved cognitive processes (e.g., attention to writing conventions, word choice, and planning). In the pilot data, many of the features extracted from the keystroke logs were correlated with human scores. Due to the small sample sizes of the pilot studies, these findings are suggestive, not conclusive; however, they suggest a line of analysis for a larger sample of keystroke logs gathered in the fall of 2009.
Applications of locative media (e.g., place‐based mobile augmented reality [AR]) are used in various educational content areas and have been shown to provide learners with valuable opportunities for investigation‐based learning, location‐situated social and collaborative interaction, and embodied experience of place (Squire, 2009; Thorne & Hellermann, 2017; Zheng et al., 2018). Mobile locative media applications’ value for language learning, however, remains underinvestigated. To address this lacuna, this study employed the widely used construct of language‐related episodes (LREs; Swain & Lapkin, 1998) as a unit of analysis to investigate language learning through participation in a mobile AR game. Analysis of videorecorded interactions of four mixed‐proficiency groups of game players (two English language learners [ELLs] and one expert speaker of English [ESE] per group) indicates that LREs in this environment were focused on lexical items relevant to the AR tasks and physical locations. Informed by sociocultural theory and conversation analysis, the microgenesis of learners’ understanding and subsequent use of certain lexical items are indicated in the findings. This understanding of new lexical items was frequently facilitated by ESEs’ assistance and the surrounding physical environment. A strong goal orientation by both ESEs and ELLs was visible, providing implications for task‐based language teaching approaches.
To address the problem of limited opportunities for practicing second language speaking in interaction, especially delicate interactions requiring pragmatic competence, we describe computer simulations designed for the oral practice of extended pragmatic routines and report on the affordances of such simulations for learning pragmatically appropriate communication. Twelve highly proficient learners of English completed six simulated conversations focused on making requests in academic contexts. Evidence of learning was examined microgenetically by comparing data across the simulated conversations and triangulated by written reflections, surveys, and interviews. Results showed that participants gained content and linguistic forms from expert speaker models, and their interactions in scenario-based simulations indicated greater pragmatic awareness and changes in oral production over time. The majority of participants viewed the program positively, commenting on features such as its authenticity and predictive accuracy.