Three-dimensional synthetic worlds introduce possibilities for nonverbal communication in computer-mediated language learning. This paper presents an original methodological framework for the study of multimodal communication in such worlds. It offers a classification of verbal and nonverbal communication acts in the synthetic world Second Life and outlines the relationships between the different types of acts that are built into the environment. The paper highlights some of the differences between the synthetic world's communication modes and those of face-to-face communication and illustrates their relevance for communication within a pedagogical context. We report on the application of the methodological framework to a course in Second Life which formed part of the European project ARCHI21. This course, for Architecture students, adopted a Content and Language Integrated Learning (CLIL) approach. The languages studied were French and English. A collaborative building activity in the students' L2 is considered, using a method designed to organise the data collected in screen recordings and to code and transcribe the multimodal acts. We explore whether nonverbal communication acts are autonomous in Second Life or whether synchronous verbal and nonverbal communication interact. Our study describes how the distribution of the verbal and nonverbal modes varied depending on the pre-defined role each student undertook during the activity. We also describe the use of nonverbal communication to overcome verbal miscommunication concerning direction and orientation. In addition, we illustrate how nonverbal acts were used to secure the context for deictic references to objects made in the verbal mode.
Finally, we discuss the importance of the nonverbal and verbal communication modes in the proxemic organisation of students, as well as the impact of proxemic organisation on the quantity of students' verbal production and the topics discussed in this mode. This paper seeks to contribute to the methodological reflection needed to better understand the affordances of synthetic worlds, including the verbal and nonverbal communication opportunities Second Life offers, how students use them, and their impact on task-related interaction.
In webconferencing-supported teaching, the webcam mediates and organizes the pedagogical interaction. Previous research has provided a mixed picture of webcam use: while the webcam is seen as a useful medium that contributes to personalizing the interlocutors' relationship, helps regulate interaction, and facilitates learner comprehension and involvement, the limited access to visual cues it provides can also be perceived as unhelpful or even disruptive. This study examines the meaning-making potential of the webcam in pedagogical interactions from a semiotic perspective by exploring how trainee teachers use the affordances of the webcam to produce non-verbal cues that may be useful for mutual comprehension. The research context is a telecollaborative project in which trainee teachers of French as a foreign language met for online sessions in French with undergraduate Business students at an Irish university. Using multimodal transcriptions of the interaction data from these sessions, screenshot data, and students' post-course interviews, it was found, firstly, that whilst a head-and-shoulders framing shot was favoured by the trainee teachers, there does not appear to be an optimal framing choice for desktop videoconferencing among the three framing types identified. Secondly, only a proportion of the gestures performed by the trainee teachers were actually visible to the students. Thirdly, when trainee teachers were able to coordinate the audio and kinesic modalities, communicative gestures that were framed, and held long enough to be perceived by the learners, were more likely to be valuable for mutual comprehension. The study highlights the need for trainee teachers to develop critical semiotic awareness and gain a better perception of the image they project of themselves, in order to actualise the potential of the webcam and give greater depth to their online teacher presence.
Supplementary material: doi.org/10.1016/j.system.2017.09.002. This paper focuses on instruction-giving practices, a crucial but under-researched aspect of online language tutorials. The context for this qualitative study is a telecollaborative exchange focussing on French as a foreign language. We investigate trainee teachers' instructions for a role-play rehearsal task during webconferencing-supported language teaching sessions. Multimodal (inter)action analysis (Jewitt, Bezemer, & O'Halloran, 2016; Norris, 2004) of the data from three sessions reveals how the trainees mark different stages in the instructions using gaze and webcam proximity, allocate roles helped by the use of gaze (Satar, 2013) and gestures (Guichon & Wigham, 2016; McNeill, 1992), and introduce key vocabulary using word-stress, gaze and text chat strategies. The paper sheds light on the need to demonstrate clear boundaries between the instructions and the beginning of the task, and the need, in future online teacher training programmes, to prepare trainees to direct learners' attention to the resources needed for task accomplishment, to explain how the task will be accomplished using the online resources, and to harness the potential of semiotic resources during this teaching phase.
Higher education institutions are increasingly interested in offering more flexible teaching and learning delivery methods that are often independent of place. Where foreign language learning is concerned, telecollaboration is gaining ground. This paper focuses on synchronous webconferencing-supported teaching and examines how different semiotic resources are used during lexical explanation sequences. The context is a telecollaborative exchange between Business students learning French and trainee teachers on a Master’s programme in Teaching French as a Foreign Language. Using multimodal transcriptions of interaction data from two sessions, the sequential analysis provides access to different combinations of semiotic resources. These include using the visual mode to project active listening strategies and the complementary role of the text chat to secure common ground concerning the target item. The analysis sheds light on a ‘thinking break’ strategy employed by the trainees. Descriptive examples demonstrate how verbal explanations were accompanied, firstly, by deictic and iconic gestures and, secondly, by metaphoric gestures used to help forefront different properties of the target item. Finally, changes in gaze and proximity were observed as playing a role in interaction management and in signalling which verbal modality was forefronted. The study illustrates emerging pedagogical and multimodal communication strategies for ‘doing vocabulary teaching’.
This study of online L2 interactions compares lexical word searches in an audioconferencing and a videoconferencing condition. Nine upper-intermediate learners of English describe a previously unseen photograph in either the videoconferencing or the audioconferencing condition. A semantic feature analysis is adopted to compare their interactions. To evaluate the contribution of the visual and verbal modes, a quantitative analysis examines the distribution of the referential properties of one target lexical item: tunnel earring. It suggests that the pushed output produced in the videoconferencing condition is lexically richer. Then, in view of these results, and focusing on two learners, one from the audioconferencing condition and one from the videoconferencing condition, a fine-grained multimodal analysis of the qualitative features of gestures and speech complements the quantitative results. It demonstrates how the videoconferencing condition allows the learner to embody salient physical referential properties of the lexical item before transferring the referential information to the verbal mode, to produce a semantically rich description. The study will interest researchers working on multimodality and L2 teachers deciding between videoconferencing and audioconferencing as pedagogical options.