Abstract: This study investigated the quality of a 40-item multiple-choice test taken from a Reading III class in an English Education Department. It was carried out to get a clearer picture of, and provide feedback on, how the Critical Reading test for the new curriculum should be developed and what factors should be taken into account when developing the test. The data were taken from 24 fourth-semester students in the English Education Department, Undiksha, Bali, who took the reading test. The data were analyzed …
“…42% of the distractors functioned while 58% did not. The findings reported in this study on the easiness of the test and the inefficiency of distractors and discrimination indices in many items are in line with the findings of Paramartha (2017), who revealed that the EFL reading test taken by a sample of Indonesian school students was easy and that less than half of the test items were eligible.…”
The significance of this study is to improve the assessment quality of future online English as a Foreign Language (EFL) writing tests delivered through Blackboard, a learning management system (LMS), and to avert the potential inclusion of odd items. This study aims to examine the Blackboard (Bb) test quality at the Preparatory Year Program (PYP) in an English for Specific Purposes (ESP) writing course using item analysis and a questionnaire on EFL teachers’ practices for constructing a good-quality test. To achieve the study objectives, 30 objective-type questions from the final Technical Writing course examination, attempted by 97 level-two preparatory year students, were analyzed to check three indices: the difficulty index, the discrimination index, and distractor efficiency. In addition, a questionnaire was administered to rate the EFL teachers’ (N=50) practices in constructing their technical writing test for the final examination against good-quality test norms. The item analysis showed that the test was valid and reliable; however, it was easy. Many items had no discrimination power, and many distractors were not functioning. The analysis of the questionnaire data showed a high level of commitment by the EFL teachers to applying the required norms for constructing a good-quality language test. Gender and teaching experience yielded no significant differences in the teachers’ application of the test norms. In light of the findings, recommendations and further research are suggested.
Keywords: Blackboard, evaluation, good quality test, Saudi EFL writing context, teacher-made test
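The three classical indices named in the abstract (difficulty, discrimination, and distractor efficiency) can be computed directly from the response data. The sketch below is illustrative only, assuming conventional definitions: difficulty as the proportion answering correctly, discrimination as the difference between upper and lower 27% score groups, and a distractor counted as functioning when at least 5% of examinees select it. The function names and thresholds are assumptions, not taken from either study.

```python
def difficulty_index(item_scores):
    """Proportion of examinees who answered the item correctly (0..1)."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, fraction=0.27):
    """D = p_upper - p_lower, comparing the top and bottom score groups."""
    ranked = sorted(range(len(total_scores)),
                    key=lambda i: total_scores[i], reverse=True)
    k = max(1, int(len(ranked) * fraction))
    upper = sum(item_scores[i] for i in ranked[:k]) / k
    lower = sum(item_scores[i] for i in ranked[-k:]) / k
    return upper - lower

def functional_distractors(choices, key, threshold=0.05):
    """Map each wrong option to True if >= 5% of examinees chose it."""
    n = len(choices)
    return {opt: choices.count(opt) / n >= threshold
            for opt in set(choices) - {key}}
```

On these definitions, an item with difficulty near 1.0 is "too easy" and a distractor chosen by almost nobody is "non-functioning", which is how the 42%/58% split quoted above would be obtained.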
“…Likewise, a study conducted by Kholilah (2017) revealed that more than half of all the test items had poor discrimination ability and could hardly distinguish between students with greater and lesser ability. Paramartha (2017), by the same token, found that many items on the examined test had low discrimination power. On the contrary, this present study resonates well with the study by Lebagi et al. (2017), who uncovered that most items of the teacher-made English test under investigation proved strong in discriminating among test takers by ability.…”
Section: Discrimination Ability of Individual Item (mentioning)
confidence: 93%
“…The finding on this degree of construct validity of a teacher-made English test neither confirms nor contrasts with the findings of other studies. Other studies in the field in the Indonesian context, to the author's knowledge, did not seek to examine the construct validity of the tests they investigated (Hakam & Irhamsyah, 2020; Indrayani et al., 2020; Jannah et al., 2021; Kholilah, 2016; Lebagi et al., 2017; Paramartha, 2017; Santy et al., 2020; Septi et al., 2020). It is the ConQuest application that allows generating the degree of construct validity of the test.…”
Section: Construct Validity (mentioning)
confidence: 99%
“…The weak aspects, however, indicate that the teacher's test development expertise is not yet sufficient. Similarly, a study conducted in an English Education program in Bali by Paramartha (2017) discovered that a teacher-made English reading test could hardly discriminate among students with different levels of mastery. Almost half of all the items required removal, and some needed revision.…”
Background: Multiple-choice, teacher-made English tests have remained popular due to their close alignment with classroom instruction. However, ample studies have indicated the need for continuous evaluation of their quality to provide evidence-based feedback for the sustained improvement of assessment practices.
Purpose: This study sought to examine the quality of a multiple-choice, teacher-made English formative informal assessment for four classes of high school students in an English course in Madura, Indonesia.
Design and methods: Data were collected from the test results of eighty students and entered into an Excel document. The data were then analysed with the ConQuest application to examine each student's response to every item of the test. Based on this item response analysis, it turned out that the test could have achieved higher credibility if the necessary moderations had been made.
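ConQuest fits Rasch-family item response models, in which the probability of a correct answer depends only on the gap between a person's ability and the item's difficulty. A minimal sketch of that core model, under standard assumptions (the symbols theta and b are the conventional ones, not taken from the study):

```python
import math

def rasch_probability(theta, b):
    """Rasch model: P(correct) for person ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the probability is exactly 0.5;
# items easier than the person's ability are answered correctly
# more than half the time, harder items less than half.
```

An item whose estimated difficulty sits far below every student's ability contributes little information, which is the kind of evidence such an analysis uses to flag items for moderation.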
Results: The findings recommend that schools as well as teacher-training institutions provide the necessary training to ensure that in-service and pre-service teachers possess adequate test development and test analysis expertise for the continuous improvement of learning, teaching, and assessment practices.