Literature on TESOL recruitment practices suggests that the myth of monolingual speakerism has shaped employment practices in many countries. The monolingual (native) speaker holds a privileged position in English language teaching, representing both the model speaker and the ideal teacher, and bilingual teachers of English in Oman are often perceived as less competent than their monolingual counterparts. The aim of this study was to critically explore and problematize the recruitment practices that discriminate against bilingual English teachers in Oman. This article reports the findings of a small-scale qualitative study conducted at an English Language Center (ELC) at one of the colleges of technology (CoTs) in Oman, drawing on data obtained from bilingual teachers of English. The results demonstrate that the native (monolingual) speaker fallacy is “alive and kicking” in Oman. Recruiting agencies prefer to hire monolingual speakers, justifying this stance on the pretext that bilinguals are incompetent imitators of English. There is also substantial salary discrimination between monolingual and bilingual teachers, despite their doing the same job, and the colonial legacy is another reason behind the preference for monolingual speakers. The impact of this discrimination is that bilingual teachers of English are left feeling inferior. Hence, it is essential to adopt policies that instill a greater sense of job security and thereby enhance motivation and innovation. The study suggests an urgent need to review recruitment practices in Oman to establish equality and create a healthy working environment.
Because oral assessment is inherently subjective, much attention has been devoted to obtaining a satisfactory level of consistency among raters. However, the process of achieving greater consistency does not necessarily yield valid decisions. One matter at the core of both reliability and validity in oral assessment is rater training. Recently, multifaceted Rasch measurement (MFRM) has been adopted to address rater bias and inconsistency in scoring, but no research has combined the facets of test takers’ ability, raters’ severity, task difficulty, group expertise, scale criterion category, and test version in a single study along with their two-way interactions. Moreover, little research has investigated how long the effects of rater training last. Consequently, this study explored the influence of a training program and feedback by having 20 raters score the oral production of 300 test takers in three phases. The results indicated that training can lead to higher interrater reliability and reduced severity/leniency and bias. However, it does not bring raters into complete unanimity; rather, it makes them more self-consistent. Even though rater training may raise internal consistency among raters, it cannot simply eradicate individual differences in their characteristics: experienced raters, owing to their idiosyncrasies, did not benefit as much as inexperienced ones. The study also showed that the outcome of training may not endure long after training ends; ongoing training throughout the rating period is therefore required to help raters regain consistency.
Omani students were introduced to independent learning tools, such as MyELT, Moodle, and MS Teams, during the Covid-19 pandemic and used these tools throughout their studies in that period. This research therefore investigated how satisfied Omani students were with independent learning tools during Covid-19. The study is significant because it has pedagogical implications for all stakeholders, including teachers, students, and policymakers. It adopted a quantitative research method: a self-prepared questionnaire was distributed to students for data collection. The participants were students from Levels One through Four of the General Foundation Program in the English Language Center at the University of Technology and Applied Sciences-Ibra, Oman; a total of 227 students (N = 227) completed the survey. The findings suggest that students’ satisfaction with independent learning tools is above average. Conducting similar studies in other higher education institutions in Oman would help inform and sustain policy decisions.
Since cultural factors play a crucial role in shaping behavioral patterns, investigating the relationship between English teachers and their students can serve as a useful index of power distance in classroom environments where different cultures manifest in interaction. The current study compared female high school students' viewpoints towards English teachers and non-English teachers in the Iranian context to discover differences in power distance between these teachers and their students. To this end, the research was conducted in three all-female high schools with female teachers, and the data were gathered through a five-item Likert-scale questionnaire investigating students' viewpoints on five main elements: acceptability, respect, teaching method, behavioral patterns, and friendship. The findings revealed a higher power distance between English teachers and their students than between non-English teachers (science, math, physics, chemistry, and art teachers) and their students. At the same time, the results implied positive viewpoints towards English teachers. For four factors (acceptability, respect, teaching method, and behavior), there is a significant difference between viewpoints towards English and non-English teachers, whereas no significant difference was found for friendship.
Received: 14 October 2021 / Accepted: 10 January 2022 / Published: 5 March 2022
Student-centered learning assessment (SCLA) constitutes a major component of current educational initiatives at the University of Technology and Applied Sciences (UTAS). However, little research has been conducted on English teachers’ understanding and practices of SCL assessment. This study therefore explores English teachers’ understanding and practices of SCL assessment at UTAS in Oman. The findings may clarify how English teachers define SCLA, what SCL-related activities they conduct, and how often they conduct them. Sixty-one teachers, with an average of 24 years of experience, participated in the study. A questionnaire was used to explore teachers’ understanding of SCLA, and interviews were conducted alongside the questionnaires to obtain more detailed information from the participants. The findings show that each English teacher has their own definition and understanding of SCLA; however, interpreting these definitions was difficult because the literature lacks a common definition of the term. Teachers should be encouraged to empower students by having them work in mixed groups, with an advanced student heading each group. This arrangement allows less able students to imitate their peers and improve their comprehension, pronunciation, and vocabulary in and out of the classroom. Future research could be enhanced by involving other stakeholders, such as students and administrators.
Performance testing involving rating scales has become widespread in second/foreign language oral assessment. However, no study has applied Multifaceted Rasch Measurement (MFRM) to the facets of test takers' ability, raters' severity, group expertise, and scale category together in a single study. Twenty EFL teachers scored the speaking performance of 200 test takers before and after a rater training program, using an analytic rating scale comprising fluency, grammar, vocabulary, intelligibility, cohesion, and comprehension categories. The results showed that the categories remained at different levels of difficulty even after the training program. This by no means indicates that the training was useless: data analysis reflected its constructive influence in producing sufficient consistency in raters' scoring of each category of the rating scale at the post-training phase. Such an outcome indicates that raters could discriminate among the various categories of the rating scale. The results also indicate that MFRM can enhance rater training and validate the functionality of the rating scale descriptors. The training helped raters use the scale's various band descriptors more efficiently, resulting in a reduced halo effect. The findings suggest that stakeholders should establish training programs to assist raters in appropriately using rating scale categories of varying difficulty. Further research could compare these results with those obtained using a holistic rating scale in oral assessment.