This study addresses a major objective of foreign language teaching: vocabulary acquisition. Modern teaching trends and rapid advances in technology enable teachers to use online and mobile applications on a very wide scale, yet the real effect of such methods needs to be measured. Accordingly, this experimental study investigated the effect of Quizlet, a rapidly growing application with online and mobile versions, on vocabulary acquisition. Two groups of low-level EFL learners at Prince Sattam Bin Abdulaziz University in Saudi Arabia (N = 42) participated in the study. Each group underwent a pretest and a posttest to assess their acquisition of the assigned vocabulary lessons, which were extracted from their syllabus. After using Quizlet for vocabulary learning for a month, the experimental group showed a significant improvement on the vocabulary posttest. The study therefore acknowledges the application’s value and recommends its use at the university level.
This paper investigates the accuracy order of grammatical morphemes among Saudi EFL learners. The major aim of the research was to reveal the participants’ pattern of grammatical morpheme acquisition and to compare it against the Natural Order Hypothesis (NOH) as stated by Krashen (1977). The factors that affected the resulting order were also discussed. The research adopted a descriptive quantitative design. Three groups of male and female students (n = 258), selected randomly from public schools and university colleges, participated in the study. They responded to a grammar elicitation task designed to test their accuracy in using grammatical morphemes.
The positive effect of Corrective Feedback (CF) on students’ writing performance has long been a topic of dispute. The dominant belief is that students benefit from feedback to a certain extent; however, there is no consensus on what type of feedback achieves this, or on the effect of CF provision on language learning in general. This study investigates the impact of Automated Written Corrective Feedback (AWCF), namely the Grammarly AI-powered writing assistant, on students’ academic writing accuracy. Sixty-four university students participated in the study after being assigned to control and experimental groups. The participants underwent a pre-test to confirm their homogeneity and proficiency levels and a post-test to explore the effect of using Grammarly on the written work of the experimental group. The main finding is that after 14 weeks of using Grammarly, the experimental group showed a significant improvement in written accuracy compared to the control group. Moreover, this progress was reflected in a substantial drop in the number of errors in specific categories, while errors of other types remained unaffected. The implications of the findings are discussed, and suggestions for further research are presented.
The importance of Corrective Feedback (CF) to language learners has long been a controversial topic. While some studies recognised the importance of CF for accurate language use, others considered it a deterrent to the meaningful acquisition of a second language. Recently, modern types of corrective feedback that exploit rapid advances in IT and Artificial Intelligence (AI) have emerged, opening new areas of investigation. To date, researchers have acknowledged the role of Automated Written Evaluation (AWE) in enhancing students’ writing and motivating them, and other studies have focused on students’ and teachers’ perceptions of such tools. However, the specific differences between this type of CF and traditional CF remain to be explored. Accordingly, the present study compared CF provided by teachers with that offered by Grammarly, a well-known writing assistant. A descriptive design was used to compare the CF instances provided by five college professors with Grammarly’s suggestions on a corpus of 115 texts (23,700 words) written by college students, and descriptive statistics were used to summarise the findings. The main results indicated no significant difference in the number of errors detected by the two techniques. However, human raters outperformed Grammarly in detecting grammatical errors and were more accurate in identifying structure-related mistakes, whereas Grammarly was more effective in detecting spelling and punctuation errors. These findings suggest using focused CF to exploit both methods: teachers can apply their regular CF approach to develop the structural aspects of language while encouraging students to adopt sophisticated writing assistants to develop their writing mechanics. To address the potential limitations of the current study, further research employing a larger sample and longitudinal, experimental designs is required.
Many factors should be considered when planning a fruitful Online Learning (OL) experience. Of these, quality is the most prominent concern and has received considerable debate. Over the years, several sets of standards for ensuring online course quality have been proposed; among these, Quality Matters (QM) is the most widely used and generally accepted rubric for quality assurance. Much research has explored its potential and its impact on maintaining online course quality, yet more research is needed to keep pace with the expansion of online learning after the COVID-19 pandemic. Additionally, as more students take fully online classes, their perceptions of QM are likely to be more authentic because they stem from actual experience. To this end, the present study explores students’ perspectives on the QM rubric as a benchmark for measuring OL course quality. The study adopted a mixed-methods design in which quantitative data were gathered by surveying 112 university students with a 42-item QM-based questionnaire. Using the participants’ average scores on the questionnaire, the researcher compared their evaluations with the QM general and specific standards. Furthermore, focus-group interviews were conducted to validate and explain the quantitative data, and the frequencies with which the most and least important standards were mentioned were calculated. The findings revealed that the participants agreed with 71% of the QM standards. On the other hand, they overvalued standards related to learners’ privacy, course introduction, assessment, and course technology, while undervaluing standards associated with learning objectives, learner support, and accessibility. The participants’ justifications for their judgements revolved around the importance of privacy in cyberspace, the vital role of online assessment tools, and a familiarity with new technologies that made IT support a secondary standard for them. These results imply reconsidering OL course quality with a greater focus on varied technologies and tools that engage students in the experience, ensure their privacy, and facilitate their interaction with course content. Further research that uses larger samples and involves QM-based OL courses is suggested to support the present findings.