Text classification, an integral part of text mining, has recently attracted much attention across industries and fields. It assigns text documents to one or more predefined categories based on content similarity. While most applications of text classification operate at the document level, question classification works at a more granular level, such as the sentence or phrase. Numerous studies have classified questions according to Bloom's taxonomy to measure the cognitive level of learners in higher-learning institutions, but they have not yet resolved the overlap problem, in which a Bloom's taxonomy verb keyword is assigned to more than one taxonomy category. This overlap makes it difficult to classify a given question into the correct category. Feature extraction also plays an important role in improving the accuracy of classifiers such as Support Vector Machines in question classification. Much past work relies on feature extraction methods such as bag of words (BOW) and syntactic analysis; addressing the overlap problem requires improved feature extraction. In view of this, this study proposes an integrated feature extraction approach that incorporates semantic aspects to classify questions according to Bloom's taxonomy. A Support Vector Machine classifier is used, as it is well known for its high accuracy in text classification. With these in place, improved accuracy in classifying questions according to Bloom's taxonomy can be expected.
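As an illustration of the bag-of-words feature extraction that the study builds on, the sketch below converts exam questions into term-count vectors. It is a minimal, hypothetical example (the toy questions and whitespace tokenizer are invented, not taken from the study):

```python
# Minimal bag-of-words (BOW) feature extraction for exam questions.
# Hypothetical toy data; a real pipeline would also lowercase,
# remove stop words, and stem the tokens.
questions = [
    "define the term operating system",
    "compare paging and segmentation",
]

# Vocabulary: every distinct token across the corpus, in sorted order.
vocab = sorted({token for q in questions for token in q.split()})

def bow_vector(question, vocab):
    """Map a question to a vector of raw term counts over the vocabulary."""
    tokens = question.split()
    return [tokens.count(term) for term in vocab]

print(bow_vector(questions[0], vocab))  # → [0, 0, 1, 1, 0, 0, 1, 1, 1]
```

A BOW representation like this discards word order, which is one reason later work in this area adds syntactic and semantic features on top of it.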
Examinations are a common way to evaluate students’ cognitive levels in higher education institutions. Exam questions are labeled manually by educators according to the cognitive domain of Bloom’s taxonomy (BT). To ease the educators’ burden, several past studies have proposed automated question classification based on BT using machine learning techniques. Feature selection, feature extraction, and term weighting are common ways to improve the accuracy of question classification. The term weighting methods commonly used in past work are unsupervised, namely TF and TF-IDF. Several variants of TF and TF-IDF exist, and the optimal variant has yet to be identified in the context of BT-based question classification. This paper therefore studies the TF, TF-IDF, and normalized TF-IDF variants to identify the variant that most enhances exam question classification accuracy. Two classifiers were used to investigate the variants: Support Vector Machine (SVM) and Naïve Bayes. The average accuracies achieved by the TF-IDF and normalized TF-IDF variants with the SVM classifier were 64.3% and 72.4% respectively, while with the Naïve Bayes classifier they were 61.9% and 63.0% respectively. Overall, the normalized TF-IDF variants outperformed the TF and TF-IDF variants in both accuracy and F1-measure. Further statistical analysis using the t-test and the Wilcoxon signed-rank test shows that the differences in accuracy between normalized TF-IDF and TF or TF-IDF are significant. Among the normalized TF-IDF variants, Normalized TF-IDF3 recorded the highest accuracy of 74.0%, and its differences in accuracy from the other normalized variants are generally significant; the optimal variant is therefore Normalized TF-IDF3.
The Normalized TF-IDF3 variant can therefore serve as a benchmark against which other term weighting techniques are compared in future work.
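To make the weighting schemes concrete, the following sketch computes raw TF-IDF and one common normalized variant (cosine/L2 normalization) for a toy question corpus. The formulas used here, idf = log(N / df) and L2 normalization, are standard textbook definitions and not necessarily the exact variants evaluated in the study:

```python
import math

# Toy corpus: each exam question is treated as one document.
questions = [
    "define the term operating system",
    "compare paging and segmentation",
    "define virtual memory",
]

def tf(term, doc):
    # Raw term frequency: count of the term in the tokenized question.
    return doc.split().count(term)

def idf(term, docs):
    # Inverse document frequency: log(N / df),
    # where df = number of questions containing the term.
    df = sum(1 for d in docs if term in d.split())
    return math.log(len(docs) / df) if df else 0.0

def tfidf_vector(doc, docs, vocab):
    return [tf(t, doc) * idf(t, docs) for t in vocab]

def l2_normalize(vec):
    # Cosine (L2) normalization: one common "normalized TF-IDF" variant.
    norm = math.sqrt(sum(w * w for w in vec))
    return [w / norm for w in vec] if norm else vec

vocab = sorted({t for d in questions for t in d.split()})
raw = tfidf_vector(questions[0], questions, vocab)
normed = l2_normalize(raw)
print(round(sum(w * w for w in normed), 6))  # → 1.0 (unit length)
```

Normalization like this removes the bias toward longer questions, which is consistent with the normalized variants outperforming raw TF and TF-IDF in the reported results.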
Bloom’s Taxonomy (BT) is widely used in educational institutions to produce high-quality exam papers that evaluate students’ knowledge at different cognitive levels. However, manual question labeling takes a long time, and not all evaluators are familiar with BT. Researchers have therefore worked to automate exam question classification based on BT. Enhancing term weighting is one way to increase classification accuracy when working with text data, but all past work on term weighting in exam question classification has focused on unsupervised term weighting (USTW) schemes. Supervised term weighting (STW) schemes have shown effectiveness in text classification but were not addressed in past studies of exam question classification. This study therefore focused on the effectiveness of STW in classifying exam questions according to BT and performed a comparative analysis between USTW and STW schemes. The STW schemes used in this study are TF-ICF, TF-IDF-ICF, and TF-IDF-ICSDF, whereas the USTW schemes used for comparison are TF-IDF, ETF-IDF, and TFPOS-IDF. Support Vector Machines (SVM), Naïve Bayes (NB), and Multilayer Perceptron (MLP) were used to train the models, and accuracy and F1 score were used to evaluate the classification results. The experimental results showed that, overall, the STW scheme TF-ICF outperformed all the other schemes, followed by the USTW scheme ETF-IDF. Both ETF-IDF and TFPOS-IDF outperformed standard TF-IDF. These outcomes point to a future research direction in which combining STW and USTW schemes may increase the accuracy of BT-based exam question classification.
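One way to see why a supervised scheme such as TF-ICF can help: it weights terms by how few Bloom categories they occur in, so category-discriminating verbs receive high weights while terms spread across all categories are suppressed. The sketch below assumes one common formulation, ICF = log(|C| / cf), where cf is the number of categories whose questions contain the term; the toy questions and labels are invented for illustration:

```python
import math

# Hypothetical labeled exam questions: (question, Bloom category).
data = [
    ("define the term operating system", "remember"),
    ("list the states of a process", "remember"),
    ("compare the paging and segmentation schemes", "analyze"),
    ("differentiate mutex and semaphore", "analyze"),
]

categories = sorted({label for _, label in data})

def icf(term):
    """Inverse category frequency: log(|C| / cf).
    Supervised: cf counts the categories containing the term, using labels."""
    cf = sum(
        1
        for c in categories
        if any(term in q.split() for q, label in data if label == c)
    )
    return math.log(len(categories) / cf) if cf else 0.0

def tf_icf(term, question):
    """TF-ICF weight of a term within one question."""
    return question.split().count(term) * icf(term)

# "define" appears only in 'remember' questions, so it keeps a positive weight;
# "the" appears in both categories, so ICF = log(2/2) = 0 and it is zeroed out.
print(tf_icf("define", "define the term operating system"))  # log(2) ≈ 0.693
print(tf_icf("the", "define the term operating system"))     # → 0.0
```

This is the same intuition that makes Bloom keyword verbs ("define", "compare", "evaluate") such strong features: they concentrate in a single cognitive level, so a supervised weight amplifies exactly the terms a classifier needs.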