This paper presents a detailed comparative analysis of the three types of eLearning (synchronous, asynchronous, and hybrid) and traditional onsite learning. The analysis considers the opinions of both teachers (the three of us) and students. We surveyed our students twice: first at the end of the winter semester (end of January), right before the Covid-19 outbreak in Europe, and again at the end of the summer semester (end of May), which coincided with the end of the first peak of the first pandemic wave in Europe. While the first survey mostly evaluates students' attitudes, the second evaluates their real-life experience. Young people nowadays belong to the so-called digital generation, so we expected students to be happier with eLearning than with traditional onsite learning. Surprisingly, however, their opinion about the pros and cons of eLearning closely resembles ours.
This paper is a continuation of the one entitled "An algorithm for automatic assignment of reviewers to papers" [1], published in the CompSysTech 2006 Conference Proceedings. The main aim of the present paper is to outline the results of the analysis and experimental study of the suggested algorithm [1]. It has been compared, in terms of accuracy and running time, to the maximum-weighted matching algorithm of Kuhn and Munkres (also known as the Hungarian Algorithm) implemented in The MyReview System [3].
Key words: automatic assignment of reviewers to papers, conference management system, the assignment problem, matching in bipartite graphs.
This article focuses on the importance of precisely calculating similarity factors between papers and reviewers for performing a fair and accurate automatic assignment of reviewers to papers. It suggests that papers' topics and reviewers' competences should be described by a taxonomy of keywords, so that the implied hierarchical structure allows similarity measures to take into account not only the number of exactly matching keywords but also, in the case of non-matching ones, how semantically close they are. The paper also suggests a similarity measure derived from the well-known and widely used Dice coefficient, adapted so that it can also be applied to sets whose elements are semantically related to each other (as concepts in a taxonomy are). This allows a non-zero similarity factor to be accurately calculated between a paper and a reviewer even if they share no keywords.
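The idea of a Dice-style coefficient over semantically related sets can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the toy taxonomy, the distance-based pairwise measure, and all function names are assumptions made for demonstration, assuming keyword similarity decays with distance to the closest common ancestor in the taxonomy.

```python
# Toy taxonomy as a child -> parent map (illustrative assumption).
PARENT = {
    "machine learning": "artificial intelligence",
    "neural networks": "machine learning",
    "graph theory": "mathematics",
    "matching": "graph theory",
}

def ancestors(k):
    """Return k followed by its chain of ancestors up to the root."""
    chain = [k]
    while k in PARENT:
        k = PARENT[k]
        chain.append(k)
    return chain

def pairwise_sim(a, b):
    """Similarity of two keywords: 1 for identical terms; otherwise it
    decays with the combined distance to the closest common ancestor,
    and is 0 if the keywords share no ancestor."""
    if a == b:
        return 1.0
    anc_a, anc_b = ancestors(a), ancestors(b)
    common = set(anc_a) & set(anc_b)
    if not common:
        return 0.0
    d = min(anc_a.index(c) + anc_b.index(c) for c in common)
    return 1.0 / (1.0 + d)

def soft_dice(paper_kw, reviewer_kw):
    """Dice-like coefficient where each keyword contributes its best
    match on the other side instead of requiring an exact match, so
    related-but-different keyword sets still score above zero."""
    if not paper_kw or not reviewer_kw:
        return 0.0
    s1 = sum(max(pairwise_sim(p, r) for r in reviewer_kw) for p in paper_kw)
    s2 = sum(max(pairwise_sim(r, p) for p in paper_kw) for r in reviewer_kw)
    return (s1 + s2) / (len(paper_kw) + len(reviewer_kw))
```

With exact matches this reduces to the classical Dice coefficient; with disjoint but taxonomically related sets, e.g. `soft_dice(["neural networks"], ["machine learning"])`, it yields a non-zero factor, which is the property the abstract emphasizes.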