The availability of a dataset is a critical component of educational data mining (EDM) pipelines. Once a dataset is at hand, the next steps in the research methodology are proper formulation of the research question, design and implementation of the data analysis pipeline and, finally, presentation of validation results. As the EDM research area continues to grow, driven by the increasing number of available tools and technologies, one critical bottleneck is the lack of a properly documented review of publicly available datasets. This paper presents a succinct yet informative description of the most widely used publicly available data sources, along with their associated EDM tasks, the algorithms applied to them, experimental results and main findings. We identify three types of data sources: well-known data sources, datasets used in EDM competitions and standalone EDM datasets. We conclude that the future success of EDM data sources will depend on their ability to track proposed approaches and their experimental results as a dashboard of benchmarked runs. Under these circumstances, reproducibility of data analysis pipelines and benchmarking of proposed algorithms become readily available to the research community, so that progress in the EDM domain can be achieved much more easily. The most important outcome is the possibility of continuously improving existing data analysis pipelines by tackling EDM tasks that rely on publicly available datasets and by benchmarking pipelines that use open-source implementations.
This article is categorized under:
Application Areas > Education and Learning
Fundamental Concepts of Data and Knowledge > Big Data Mining
Plagiarism detection is an application domain of NLP that has not been extensively investigated in the context of recently developed attention mechanisms and sentence transformers. In this paper, we present a plagiarism detection approach that uses state-of-the-art deep learning techniques to provide more accurate results than classical plagiarism detection techniques. Because it relies on attention mechanisms and aims at text encoding and contextualization, this approach goes beyond classical word searching and matching, which is time-consuming and easily cheated. To gain proper insight into the system, we investigate three approaches in order to ensure that the results are relevant and well validated. The experimental results show that the system using a pre-trained BERT model offers the best results, outperforming GloVe and RoBERTa.
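The encode-and-compare idea described above can be sketched minimally. The toy word vectors and the `plagiarism_score` helper below are illustrative assumptions, not the paper's implementation: a real system would replace the lookup table with contextual sentence embeddings from a pre-trained model such as BERT and then compare candidate passages by cosine similarity in the same way.

```python
import math

# Toy static word vectors standing in for contextual BERT token embeddings
# (assumption for illustration; a real system would query a transformer).
TOY_VECTORS = {
    "the": [0.1, 0.3], "cat": [0.9, 0.1], "sat": [0.4, 0.8],
    "dog": [0.85, 0.2], "ran": [0.5, 0.7], "mat": [0.3, 0.9],
}

def embed(sentence):
    """Mean-pool per-token vectors into a single sentence vector."""
    tokens = [TOY_VECTORS[w] for w in sentence.lower().split() if w in TOY_VECTORS]
    dim = len(next(iter(TOY_VECTORS.values())))
    return [sum(vec[i] for vec in tokens) / len(tokens) for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def plagiarism_score(text_a, text_b):
    """Similarity of the pooled embeddings; values near 1.0 flag likely reuse."""
    return cosine(embed(text_a), embed(text_b))
```

Embedding-based comparison of this kind scores paraphrases by meaning rather than by exact word overlap, which is why such approaches are harder to cheat than literal word matching.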