The COVID-19 outbreak is an international problem that has affected people, including students, all over the world. When lockdowns were imposed internationally, learning management systems began to be used more heavily than before. These systems have been used not only for online learning but also to support traditional forms of learning. The pandemic has highlighted the need for online learning systems in education, but it is very important for these systems to be secure and to verify the authenticity of students when they access a course or evaluation questions. Everything is moving towards the digital world, with students connecting remotely to online systems. Soon, all activities in the educational environment, including student evaluation, will be performed digitally on learning management systems. In this paper, we propose a secure learning management system that uses a student's behavior to verify whether the logged-in user is the authentic student. The system can support the teacher's activities in the learning process and verify the authenticity of the students logged on to the system. This paper is aimed at learning management system developers, who can use the proposed algorithms in their platforms, and at teachers, who should understand the importance of identifying students on these platforms.
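The abstract does not specify the behavioral features or the decision rule the system uses. As a minimal sketch of the general idea, assuming keystroke-timing features and a simple per-feature z-score check (both the feature choice and the threshold are assumptions, not the paper's method):

```python
from statistics import mean, stdev

def build_profile(sessions):
    """Build an enrolled behavior profile: per-feature mean and
    standard deviation over past sessions (e.g., typing timings)."""
    features = list(zip(*sessions))
    return [mean(f) for f in features], [stdev(f) for f in features]

def is_authentic(profile, observation, threshold=2.0):
    """Accept a login as authentic only if every observed feature
    lies within `threshold` standard deviations of the enrolled mean."""
    means, stds = profile
    for m, s, x in zip(means, stds, observation):
        if s > 0 and abs(x - m) / s > threshold:
            return False
    return True

# Hypothetical enrolled sessions: [avg key dwell ms, avg flight ms]
profile = build_profile([[110, 85], [105, 90], [115, 88]])
```

A genuine session close to the enrolled profile (e.g., `[112, 86]`) passes, while a strongly deviating one (e.g., `[150, 120]`) is flagged for further verification by the teacher.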
The aim of this paper is to enhance the process of diagnosing and detecting possible vulnerabilities within an Internet of Things (IoT) system by using a named entity recognition (NER)-based solution. In both research and practice, security management experts rely on a large variety of heterogeneous security data sources, usually available in the form of natural language. This is challenging because the process is very time-consuming, and it is difficult to stay up to date with the constant findings in the areas of security threats, vulnerabilities, attacks, countermeasures, and risks. The proposed system is conceived as a semantic indexing solution for existing vulnerabilities and serves as an information tool for security management experts. By integrating the proposed system, users can easily discover the potential vulnerabilities of their IoT devices. The proposed solution integrates ontologies and NER techniques in order to obtain a high rate of automation, with the goal of reaching a self-maintained, up-to-date system in terms of vulnerabilities and common exposures knowledge. To achieve this, a total of 312 CVEs (common vulnerabilities and exposures) specific to the IoT field were identified. CVEs are arguably one of the most important cybersecurity resources available today, containing information about the latest discovered vulnerabilities. This set is further used as the data corpus for an NER model designed to identify the main entities and relations relevant to IoT security. The goal is to automatically monitor cybersecurity information relevant to IoT and to filter and present it in an organized, structured framework based on users' needs. The taxonomies specific to IoT security are implemented via a domain ontology, which is later used to process natural language. Relevant tokens are marked as entities, and the relations between them are identified.
The text analysis solution is connected to a gateway that scans the environment and identifies the main IoT devices and communication technologies. The strength of the proposed approach is that the designed semantic gateway uses context-aware searches in the modeled IoT security database and can identify possible vulnerabilities before they can be exploited.
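Neither the ontology nor the NER model is given in the abstract. As a rough illustration of entity marking over CVE text, assuming a toy dictionary-based ontology and a regex for CVE identifiers (a real system would use a trained NER model over the 312-CVE corpus):

```python
import re

# Toy IoT-security "ontology": entity label -> surface forms.
# These labels and terms are invented for illustration only.
ONTOLOGY = {
    "DEVICE": {"camera", "router", "smart lock"},
    "ATTACK": {"buffer overflow", "remote code execution"},
    "PROTOCOL": {"mqtt", "zigbee"},
}

# CVE identifiers follow the pattern CVE-YYYY-NNNN (4+ digits).
CVE_ID = re.compile(r"CVE-\d{4}-\d{4,}")

def tag_entities(text):
    """Mark CVE identifiers and ontology terms found in free text."""
    entities = [(m.group(), "CVE") for m in CVE_ID.finditer(text)]
    lowered = text.lower()
    for label, terms in ONTOLOGY.items():
        for term in terms:
            if term in lowered:
                entities.append((term, label))
    return sorted(entities)
```

For example, `tag_entities("CVE-2020-12345 allows remote code execution on the MQTT router")` marks the CVE identifier plus the attack, protocol, and device mentions, which a gateway could then match against the devices it discovered on the local network.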
The present paper begins with a short introduction to the major debates regarding plagiarism and author identification, along with the principles underlying intellectual property law in the European and Anglo-American legal traditions. Regardless of the tradition involved, plagiarism is a form of using others' research, verbatim or modified, and presenting it as a personal creation. Creativity and plagiarism are analyzed in antithesis, leading to the concept of originality, defined as the property a creative research paper has when the ideas presented within it differ from those already published by other authors. A metric is implemented in order to obtain a measurable value for determining the level of originality of a paper. The main ways of testing a paper for plagiarism, intrinsic and external analysis, are described in order to choose the proper methodology for determining the originality of scientific papers. The research leads to stylometric analysis, a field at the crossroads of plagiarism, originality, and author identification. Stylometric analysis is performed within intrinsic plagiarism detection and is based on a number of metrics that uniquely describe the writing style of a specific author. The testing platform uses a set of research papers written by European authors and extracts the values of eight writing-style metrics. A clustering algorithm is applied, and the best combination of metrics is obtained.
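The eight writing-style metrics are not enumerated in the abstract. As an illustrative sketch, assuming two common stylometric features (average sentence length and type-token ratio — a subset standing in for the paper's eight metrics) and a Euclidean distance between style vectors such as a clustering step would use:

```python
import re

def style_vector(text):
    """Extract two simple stylometric features from a text:
    average sentence length (words per sentence) and
    type-token ratio (vocabulary richness)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return (len(words) / len(sentences),    # average sentence length
            len(set(words)) / len(words))   # type-token ratio

def style_distance(a, b):
    """Euclidean distance between two style vectors; a clustering
    algorithm would group papers whose vectors are close."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

For instance, `style_vector("One two three. Four five six.")` yields `(3.0, 1.0)`: three words per sentence and no repeated vocabulary. Feeding such vectors to a clustering algorithm groups texts by author style, which is the basis of intrinsic plagiarism detection.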
The term word sense disambiguation (WSD) is introduced in the context of text document processing. A knowledge-based approach is conducted using the WordNet lexical ontology, describing its structure and the components used in the process of identifying the context-dependent senses of each polysemous word. The principal distance measures based on the graph associated with WordNet are presented, and their advantages and disadvantages are analyzed. A general model for aggregating distances and probabilities is proposed and implemented in an application in order to detect the contextual sense of each word. For words that do not exist in WordNet, a similarity measure based on co-occurrence probabilities is used. The WSD module is proposed for integration into the document processing step of tasks such as supervised and unsupervised classification in order to improve classification accuracy. Future work concerns the implementation of different domain-oriented ontologies.
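The exact aggregation model is not given in the abstract. As a minimal sketch of the idea of combining several sense-context measures into one weighted score, using an invented toy sense inventory and a Lesk-style gloss-overlap measure in place of the paper's WordNet graph distances:

```python
# Toy sense inventory for "bank" (glosses invented for illustration;
# a real system would read them from WordNet).
SENSES = {
    "bank.n.01": {"financial", "institution", "money", "deposit"},
    "bank.n.02": {"river", "slope", "water", "land"},
}

def overlap(gloss, context):
    """Lesk-style measure: number of words shared by the sense
    gloss and the surrounding context."""
    return len(gloss & context)

def disambiguate(word_senses, context, measures, weights):
    """Pick the sense maximizing a weighted sum of measures,
    mirroring the aggregation of distances and probabilities."""
    def score(item):
        _, gloss = item
        return sum(w * m(gloss, context) for m, w in zip(measures, weights))
    return max(word_senses.items(), key=score)[0]
```

With the context `{"money", "deposit", "river"}`, the financial sense scores 2 overlapping words against 1 for the river sense, so `disambiguate` selects `bank.n.01`. Additional measures (graph distances, co-occurrence probabilities for out-of-vocabulary words) would simply be appended to `measures` with their own weights.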
Processing large volumes of data is still a performance or cost issue for many businesses, and also for research, despite current developments in hardware, the increased availability of cloud services, and the growing adoption of solutions from fields such as machine learning and artificial intelligence. The reason is simple: businesses and clients produce more and more data, quickly exceeding the processing capacity or the cost limits of the owner that must process it. In this paper, we address the problem of providing data processing capabilities to universities or other institutions that lack a proper infrastructure or the required budget but have access to large volunteer communities that can share their devices and computing power to form ad hoc data processing networks for a limited time. To overcome problems common to the management of large networks and to the distribution of cross-platform software clients, we propose a decentralized architecture that uses Internet technologies as the foundation for running Web-based applications.
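The abstract leaves the coordination protocol unspecified. As a rough sketch of the fan-out/merge pattern such an ad hoc network relies on — with threads standing in for browser-based volunteer clients, and the summation task chosen purely for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def split_work(data, n_chunks):
    """Partition a dataset into roughly equal chunks, one per volunteer node."""
    k, r = divmod(len(data), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        end = start + k + (1 if i < r else 0)
        chunks.append(data[start:end])
        start = end
    return chunks

def volunteer_node(chunk):
    """Stand-in for a Web-based client computing a partial result."""
    return sum(chunk)

def run_adhoc_network(data, n_volunteers=4):
    """Coordinator: fan work out to volunteer nodes, merge partial results."""
    with ThreadPoolExecutor(max_workers=n_volunteers) as pool:
        partials = pool.map(volunteer_node, split_work(data, n_volunteers))
    return sum(partials)
```

In the proposed architecture the `volunteer_node` role would instead run in a volunteer's browser for a limited time, with the coordinator distributing chunks and merging results over standard Web protocols rather than an in-process thread pool.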