Purpose – The purpose of this paper is to explore the assessment methods used in higher education to assess students' learning, and to investigate the effects of college and grading system on the assessment methods used.
Design/methodology/approach – This descriptive study investigates the assessment methods used by teachers in higher education to assess their students' learning outcomes. An instrument consisting of 15 items (each item being an assessment method) was distributed to 736 undergraduate students from four public universities in Jordan.
Findings – Findings show that the traditional paper-pencil test is the most common method used to assess learning in higher education. Results also show that teachers in colleges of science and engineering and colleges of nursing use assessment methods beyond traditional testing, such as real-life tasks (authentic assessment), papers, and projects. The results also show that teachers use the same assessment methods to assess learning regardless of the grading system (letters or numbers) used at their institutions.
Research limitations/implications – The sample of the study was limited to undergraduate students, and teachers' points of view about the frequency of use of assessment methods were not studied.
Practical implications – Higher education institutions should encourage teachers to use new and modern assessment methods alongside traditional paper-pencil testing, and should study the reasons these new methods are not used.
Originality/value – The paper should alert higher education institutions to the importance of developing the assessment process by learning their students' points of view about assessment methods. This will help get students involved in the learning process.
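As a concrete illustration of the descriptive analysis this abstract describes, the sketch below tallies how often each assessment method is reported by respondents and ranks the methods by frequency; the data, method names, and response coding are hypothetical, not taken from the study's 15-item instrument.

```python
from collections import Counter

# Hypothetical responses: each student lists the assessment
# methods they report their teachers using.
responses = [
    ["paper-pencil test", "project"],
    ["paper-pencil test"],
    ["paper-pencil test", "authentic task", "paper"],
]

# Count how many students report each method, then rank them.
counts = Counter(m for r in responses for m in r)
for method, n in counts.most_common():
    print(f"{method}: {n}/{len(responses)} students")
```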
Purpose – This study uses the item response theory (IRT) rating scale model to analyze students' perceptions of assessment practices in two universities: one in Jordan and the other in the USA. Results show that both universities still focus on paper-pencil testing to assess students' learning outcomes. The study recommends that higher education institutions encourage their teachers to use different assessment methods to assess students' learning outcomes.
Design/methodology/approach – The convenience sample consisted of 506 university students from the USA and Jordan, distributed by educational level as follows: 83 freshmen, 139 sophomores, 157 juniors and 59 seniors. (Note: some students from both universities did not report their gender and/or their educational level.) The USA sample consisted of 219 students, of whom 43 were males and 173 were females, from three colleges (arts and sciences, education, and commerce and business) at a major university in the southeast of the USA. The study used the Students Perception of Assessment Practices Inventory developed by Alquraan (2007), and the RUMM2020 program was used to fit the rating scale model.
Findings – Both universities, in Jordan and the USA, still focus more on the developmental (construction of assessment tasks), organizational and planning aspects of the assessment process than on assessment of learning and assessment methods (traditional and new). As reported by the students sampled, the assessment practices used most frequently in both universities are: "(I27) I know what to study for the test in this class", "(I6) Teacher provides a good environment during test administration" and "(I21) My teacher avoids interrupting students as they are taking tests". This indicates that teachers in the selected universities tend to focus on the administrative and communicative aspects of assessment (e.g. providing a good environment during test administration) more than on using different assessment methods (e.g. portfolios, new technology, computers, peer and self-assessment) or assessment practices that help students learn in different ways (e.g. assessing students' prior knowledge and providing written feedback on graded tests).
Originality/value – This is a cross-cultural study focusing on the assessment of students' learning in higher education.
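For context, the rating scale model that RUMM2020 fits is usually written in the Andrich form below, giving the probability that person n responds in category x of item i as a function of the person location θ_n, the item difficulty δ_i, and thresholds τ_j shared across items. This is the standard textbook formulation, not notation taken from the paper itself.

$$
P(X_{ni} = x) = \frac{\exp\!\left(\sum_{j=0}^{x}\bigl[\theta_n - (\delta_i + \tau_j)\bigr]\right)}{\sum_{k=0}^{m}\exp\!\left(\sum_{j=0}^{k}\bigl[\theta_n - (\delta_i + \tau_j)\bigr]\right)}, \qquad \tau_0 \equiv 0,
$$

where m is the highest response category of the scale.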
Purpose – The purpose of this paper is to investigate the effect of insufficient effort responding (IER) on the construct validity of student evaluations of teaching (SET) in higher education.
Design/methodology/approach – A total of 13,340 SET surveys collected by a major Jordanian university to assess teaching effectiveness were analyzed in this study. An IRT-based detection method was used to detect IER, and construct (factorial) validity was assessed using confirmatory factor analysis (CFA) and principal component analysis (PCA) before and after removing the detected IER.
Findings – The results of this study show that 2,160 of the 13,340 SET surveys were flagged as insufficient effort responses, representing 16.2 percent of the sample. Moreover, the results of the CFA and PCA show that removing the detected IER statistically enhanced the construct (factorial) validity of the SET survey.
Research limitations/implications – Since IER responses are often ignored by researchers and practitioners in industrial and organizational psychology (Liu et al., 2013), the results of this study strongly suggest that higher education administrations should give the necessary attention to IER responses, as SET results are used in making critical decisions.
Practical implications – The results of the current study recommend that universities carefully design online SET surveys and provide students with clear instructions in order to minimize students' engagement in IER. Moreover, since SET results are used in making critical decisions, higher education administrations should give the necessary attention to IER by examining the IER rate in their data sets and its consequences for data quality.
Originality/value – Reviewing the related literature shows that this is the first study to investigate the effect of IER on the construct validity of SET in higher education using an IRT-based detection method.
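The abstract does not spell out the detection pipeline, so the following is a minimal illustrative sketch in Python of the general workflow: flag surveys with a simple IER index (a longest-identical-run count, standing in for the paper's IRT-based person-fit statistic) and compare PCA results before and after removing flagged cases. The data, the cutoff of 10, and the variable names are all hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

def longest_run(row):
    """Length of the longest run of identical consecutive answers."""
    best, run = 1, 1
    for a, b in zip(row, row[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

# Hypothetical data: 1,000 surveys x 20 Likert items (1-5).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(1000, 20))

# Flag surveys whose longest identical run meets a cutoff
# (a crude stand-in for the paper's IRT-based detection).
runs = np.array([longest_run(r) for r in responses])
flagged = runs >= 10

# Compare variance explained by the first principal component
# before and after removing flagged responses.
before = PCA(n_components=1).fit(responses)
after = PCA(n_components=1).fit(responses[~flagged])
print("flagged surveys:", flagged.sum())
print("PC1 variance ratio before:", before.explained_variance_ratio_[0])
print("PC1 variance ratio after:", after.explained_variance_ratio_[0])
```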