The fairness and precision with which university academics evaluate students' oral presentations has become a debated subject. Students have seriously questioned the effectiveness of evaluating PowerPoint presentations because of the unreliability of the scoring procedure. It is therefore important to establish a structured evaluation system for PowerPoint-based oral presentations in order to guarantee fairness for every student. To minimize potential biases, most universities presently adopt Objective Structured Evaluation systems to enhance the transparency and reliability of assessments. Against this background, the present study analysed bias in the assessment of oral presentations for a student cohort at a university. For this study, the mean score each student received from each examiner was used. Single-factor ANOVA tests were conducted to compare mean scores across three examiner groups: professors, senior lecturers, and probationary lecturers. Tukey simultaneous tests were then conducted to identify mean differences in each pairwise comparison. Strong evidence of differences among the three examiner groups was found. A greater degree of variance was also identified within the most senior group of examiners. Significant variance was likewise present within the senior lecturer group, whereas the probationary lecturer group did not show any significant variance. In conclusion, our findings demonstrate statistically significant differences in the marks awarded for undergraduates' PowerPoint presentations, influenced by examiners' experience and seniority, both between examiner groups and within the same level of examiner.
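The analysis pipeline described above (single-factor ANOVA followed by Tukey's simultaneous test) can be sketched as follows. This is a minimal illustration only: the group sizes, means, and score distributions below are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative (synthetic) mean scores per student, one array per
# examiner group -- the real study's data are not reproduced here.
rng = np.random.default_rng(42)
professors = rng.normal(60, 8, size=12)              # hypothetical scores
senior_lecturers = rng.normal(70, 5, size=12)        # hypothetical scores
probationary_lecturers = rng.normal(74, 3, size=12)  # hypothetical scores

# Single-factor (one-way) ANOVA: tests whether the three group means differ.
f_stat, p_value = stats.f_oneway(
    professors, senior_lecturers, probationary_lecturers
)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")

# A significant ANOVA is typically followed by Tukey's HSD to locate
# which pairwise mean differences drive the result (as in the study).
if hasattr(stats, "tukey_hsd"):  # available in SciPy >= 1.11
    print(stats.tukey_hsd(professors, senior_lecturers, probationary_lecturers))
```

A significant F statistic alone only says that at least one group mean differs; the Tukey step is what identifies the specific examiner-group pairs responsible, which is why the study reports both.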