Although visual support in the form of pictures and video has been widely used in language teaching, there appears to be a dearth of research on the role of visual aids in L2 listening tests (Buck, 2000; Ockey, 2007) and an absence of sound theoretical perspectives on this issue (Ginther, 2001; Gruba, 1999). The existing studies of the role of visual support in L2 listening tests have yielded inconclusive results. While some studies showed that visuals can improve test-takers' performance on L2 listening tests (e.g., Ginther, 2002), others revealed no facilitative effect of visuals on test-takers' listening comprehension (e.g., Coniam, 2001; Gruba, 1993; Ockey, 2007). The present study, conducted at Iowa State University in Spring 2008, investigated the influence of context visuals, namely a single photograph and video, on test-takers' performance on a computer-based Listening Test developed specifically for this study. The Listening Test, consisting of six listening passages and 30 multiple-choice questions, was administered to 34 international students from three English listening classes. In particular, the study examined whether test-takers perform differently on three types of listening passages: passages with a single photograph, video-mediated listening passages, and audio-only listening passages. In addition, participants' responses on the Post-Test Questionnaire were analyzed to determine whether their preferences for visual stimuli in listening tests corresponded with their actual performance on the different types of visuals. The results indicated that while no difference was found between the scores for photo-mediated and audio-only listening passages, participants' performance on video-mediated listening passages was significantly lower.

CHAPTER 1. INTRODUCTION

This thesis is concerned with the role of visual support in second language (L2) listening comprehension. Specifically, this study focuses on the use of a single photograph and video in L2 listening tests and the impact of these visual elements in terms of their facilitative or distracting effect on L2 test-takers' performance. Although visuals have been used in L2 teaching and testing for a number of decades (Coniam, 2001; Ginther, 2001, 2002; Ockey, 2007), there is insufficient empirical evidence to date concerning the role of visual support in assessing L2 learners' listening comprehension.

Statement of the Problem

In light of advances in computer-assisted language learning (CALL) and the use of technology in testing, listening tests (such as those included in the listening section of the TOEFL iBT) are now offered both online and offline. Although changes in technology have fueled interest in visual instructional materials (Wetzel, Radtke, & Stern, 1994), there appears to be a dearth of research on the role of visual aids in L2 listening tests (Buck, 2000; Ockey, 2007) and an absence of sound theoretical perspectives on this issue (Ginther, 2001; Gruba, 1999). Early research on visual support suggested that one way to p...
Investigating how visuals affect test takers' performance on video-based L2 listening tests has been the focus of many recent studies. While most existing research has been based on test scores and self-reported verbal data, few studies have examined test takers' viewing behavior (Ockey, 2007; Wagner, 2007, 2010a). To address this gap, in the present study I employ eye-tracking technology to record the eye movements of 33 test takers during the Video-based Academic Listening Test (VALT). Specifically, I aim to explore test takers' oculomotor engagement with two types of videos from the VALT – context videos and content videos – and the relationship between test takers' viewing behavior and their test performance. Three eye-tracking measures (fixation rate, dwell rate, and total dwell time) were compared across context and content videos using paired-samples t-tests, and each measure was additionally correlated with test scores for the items associated with each video type. Results revealed statistically significant differences in fixation rate and total dwell time between the two video types, but no difference in dwell rate. No statistically significant relationship was found between the three eye-tracking measures and the test scores. Directions for future research on video-based L2 listening assessment are discussed.
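For readers unfamiliar with the analyses named in this abstract, the following minimal sketch shows how paired-samples t-tests across two video types and correlations between an eye-tracking measure and test scores can be run in Python with SciPy. This is not the author's code; all data, values, and variable names are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 33  # number of test takers in the study

# Hypothetical per-participant eye-tracking measures for the two video types
fixation_rate_context = rng.normal(3.0, 0.5, n)
fixation_rate_content = rng.normal(3.4, 0.5, n)
# Hypothetical scores on items associated with context videos
scores_context = rng.integers(0, 6, n).astype(float)

# Paired-samples t-test: the same participants viewed both video types
t_stat, p_val = stats.ttest_rel(fixation_rate_context, fixation_rate_content)
print(f"paired t = {t_stat:.2f}, p = {p_val:.3f}")

# Pearson correlation between an eye-tracking measure and test scores
r, p_r = stats.pearsonr(fixation_rate_context, scores_context)
print(f"r = {r:.2f}, p = {p_r:.3f}")
```

A paired (rather than independent-samples) test is the appropriate choice here because each participant contributes a measurement under both video conditions.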
Automatic speech recognition (ASR) is an independent, machine‐based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker through a microphone, analyzes it using some pattern, model, or algorithm, and produces an output, usually in the form of a text (Lai, Karat, & Yankelovich, 2008).
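As a concrete illustration of the input-analysis-output pipeline just described, the sketch below uses the open-source SpeechRecognition package for Python. This choice is an assumption made purely for illustration: the article does not reference any particular ASR system, and the audio file name is hypothetical.

```python
import speech_recognition as sr  # open-source SpeechRecognition package

recognizer = sr.Recognizer()

# 1. Acoustic input: read a recorded utterance (hypothetical file name)
with sr.AudioFile("utterance.wav") as source:
    audio = recognizer.record(source)

# 2. Analysis and 3. Output: decode the audio with a speech-to-text model
# and print the transcription as plain text
try:
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech could not be decoded.")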