Foreign language listening anxiety (FLLA), which comprises various factors influencing listening performance, has been extensively investigated in English as a foreign language (EFL) contexts. However, little attention has been given to the effects of FLLA factors at different listening proficiency levels. This paper investigated 78 English majors from a Chinese university to examine the differences in, and the effects of, FLLA factors on listening performance among low-proficiency (n = 20) and high-proficiency (n = 19) EFL listeners. The participants were required to complete a 25-item FLLA questionnaire and take a listening test. The Mann-Whitney U test revealed that the two groups differed significantly in their self-belief in listening proficiency. Sequential multiple regression analyses showed that, for the less proficient listeners, the listening-anxiety factor was a negative predictor and the (lack of) self-belief factor was a positive predictor of performance. However, the three factors (including the decoding-skills factor) had no explanatory power for the high-proficiency group's listening performance. Additionally, dissatisfaction with one's current listening proficiency may facilitate the less proficient listeners' performance but has a considerably detrimental impact on more proficient listeners. Finally, pedagogical implications of FL listening anxiety and suggestions for further research are included.
Auto-generated captions on YouTube have proven useful in helping viewers better understand the words being spoken. At times, however, the captions are inaccurate, leading to confusion. The aim of this paper is to identify and analyze errors in the auto-generated captions of 20 commencement speeches on YouTube. These speeches were delivered over a period of 12 years by speakers from different walks of life; the researchers selected ten male and ten female icons. Only the first 10 minutes of each speech were utilized for this investigation, and all captioning errors were collected and analyzed. The analysis revealed that the number of errors per speech ranged between 10 and 46, with an average of one error occurring about every 26 seconds. Among the error categories, nouns recorded the highest number of errors with 144 cases (31.3%), followed by verbs with 93 cases (20.2%) and prepositions with 37 cases (8.1%). Among the four subcategories, namely omission, addition, substitution, and word order, substitution recorded the highest number of errors with 357 cases (77.6%). Furthermore, the errors were classified into two major groups: errors involving function words appeared in 169 cases (36.7%), while errors involving content words appeared in 291 cases (63.3%). These results suggest that continued development of the speech recognition software that automatically generates captions is necessary to produce more accurate captions and help viewers and listeners better comprehend video content.