A promising research area in the field of Group Decision Making (GDM) is the study of interpersonal influence and its impact on the evolution of experts' opinions. In conventional GDM models, a group of experts express their individual preferences on a finite set of alternatives; preferences are then aggregated and the best alternative, satisfying the majority of experts, is selected. In real situations, however, experts form their opinions in a complex interpersonal environment where preferences are liable to change under social influence. In order to account for the effects of social influence during the GDM process, we propose a new influence-guided GDM model based on the following assumptions: experts influence each other, and the more an expert trusts another expert, the more their opinion is influenced by that expert. The effects of social influence are especially relevant when, due to domain complexity, limited expertise, or pressure to make a decision, an expert is unable to express preferences on some alternatives, i.e. in the presence of incomplete information. The proposed model adopts fuzzy rankings to collect both experts' preferences on the available alternatives and trust statements about other experts. Starting from the collected information, which may be incomplete, the configuration and strengths of interpersonal influences are evaluated and represented through a Social Influence Network (SIN). The SIN, in turn, is used to estimate missing preferences and to evolve them by simulating the effects of experts' interpersonal influence before aggregating them for the selection of the best alternative. The proposed model has been evaluated on synthetic data to demonstrate the influence-driven evolution of opinions and its convergence properties.
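The core mechanism described above, opinions evolving under trust-weighted interpersonal influence, can be sketched with a simple DeGroot-style averaging update. This is a hypothetical illustration, not the paper's actual update rule on fuzzy rankings: the function name, the row-normalization of trust, and the fixed number of iterations are all assumptions made for the sketch.

```python
import numpy as np

def evolve_opinions(opinions, trust, steps=50):
    """Evolve experts' opinions toward a trust-weighted consensus.

    opinions: (n_experts, n_alternatives) array of preference scores
    trust:    (n_experts, n_experts) array, trust[i, j] = trust of expert i in expert j
    """
    # Row-normalize trust so each expert's total received influence sums to 1
    W = trust / trust.sum(axis=1, keepdims=True)
    x = opinions.astype(float)
    for _ in range(steps):
        # Each opinion becomes a trust-weighted average of all experts' opinions
        x = W @ x
    return x

# Two experts rating three alternatives; expert 0 places high trust in expert 1
opinions = np.array([[0.9, 0.1, 0.5],
                     [0.2, 0.8, 0.5]])
trust = np.array([[0.3, 0.7],
                  [0.4, 0.6]])
evolved = evolve_opinions(opinions, trust)
```

Because the normalized trust matrix here is positive and row-stochastic, repeated averaging converges to a consensus: after enough steps, both rows of `evolved` coincide, which illustrates the convergence property the abstract refers to.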
This paper presents an approach to automatic course generation and student modeling. The method was developed during the European-funded projects Diogene and Intraserv, which focused on the construction of an adaptive e-learning platform. The aim of the platform is the automatic generation and personalization of courses, taking into account pedagogical knowledge about the didactic domain as well as statistical information on both the student's degree of knowledge and learning preferences. Pedagogical information is described by means of an innovative methodology suitable for effective and efficient course generation and personalization. Moreover, statistical information can be collected and exploited by the system in order to better describe the student's preferences and learning performance. Learning material is chosen by the system by matching the student's learning preferences with the learning material type, following the pedagogical approach suggested by Felder and Silverman. The paper discusses how automatic learning material personalization facilitates distance-learning access for both able-bodied and disabled people. Results from the Diogene and Intraserv evaluations are reported and discussed.
Astronomical wide‐field imaging performed with new large‐format CCD detectors poses data reduction problems of unprecedented scale, which are difficult to deal with using traditional interactive tools. We present here NExt (Neural Extractor), a new neural network (NN)-based package capable of detecting objects and performing both deblending and star/galaxy classification in an automatic way. Traditionally, in astronomical images, objects are first distinguished from the noisy background by searching for sets of connected pixels having brightnesses above a given threshold; they are then classified as stars or as galaxies through diagnostic diagrams having variables chosen according to the astronomer's taste and experience. In the extraction step, assuming that images are well sampled, NExt requires only the simplest a priori definition of ‘what an object is’ (i.e. it keeps all structures composed of more than one pixel) and performs the detection via an unsupervised NN, approaching detection as a clustering problem that has been thoroughly studied in the artificial intelligence literature. The first part of the NExt procedure consists of an optimal compression of the redundant information contained in the pixels via a mapping from pixel intensities to a subspace identified through principal component analysis. At magnitudes fainter than the completeness limit, stars are usually almost indistinguishable from galaxies, and therefore the parameters characterizing the two classes do not lie in disconnected subspaces, thus preventing the use of unsupervised methods. We therefore adopted a supervised NN (i.e. an NN that first finds the rules to classify objects from examples and then applies them to the whole data set). In practice, each object is classified depending on its membership of the regions mapping the input feature space in the training set.
In order to obtain an objective and reliable classification, instead of using an arbitrarily defined set of features we use an NN to select the most significant features among the large number of measured ones, and then we use these selected features to perform the classification task. In order to optimize the performance of the system, we implemented and tested several different models of NN. The comparison of NExt's performance with that of the best detection and classification package known to the authors (SExtractor) shows that NExt is at least as effective as the best traditional packages.
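The PCA compression step described in the abstract, mapping redundant pixel features onto a lower-dimensional principal subspace, can be sketched as follows. This is a generic illustration under assumed names and synthetic data; it is not the NExt implementation, which feeds such compressed vectors to its neural networks.

```python
import numpy as np

def pca_compress(X, n_components):
    """Project the rows of X onto the top n_components principal directions."""
    Xc = X - X.mean(axis=0)                    # center the data
    cov = np.cov(Xc, rowvar=False)             # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]   # leading principal directions
    return Xc @ top

rng = np.random.default_rng(0)
# 200 synthetic 5-D "pixel feature" vectors built from only 2 latent factors,
# so the 5 measured features are redundant (rank 2)
base = rng.normal(size=(200, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 3))])
Z = pca_compress(X, 2)  # 2 components capture all the variance here
```

Because the synthetic features are exact linear combinations of two latent factors, the two retained components preserve the full variance, which is the sense in which the compression in the abstract is "optimal" for redundant pixel information.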
Nowadays, Artificial Intelligence (AI) is widely applied in every area of daily life. Despite its benefits, AI's application suffers from the opacity of complex internal mechanisms and does not, by design, satisfy the principles of Explainable Artificial Intelligence (XAI). This lack of transparency is especially problematic in the field of Cybersecurity, because entrusting crucial decisions to a system that cannot explain itself presents obvious dangers. Several methods in the literature are capable of providing explainability of AI results. However, the application of XAI in Cybersecurity can be a double-edged sword: it substantially improves Cybersecurity practices but simultaneously leaves the system vulnerable to adversarial attacks. Therefore, there is a need to analyze the state of the art of XAI methods in Cybersecurity to provide a clear vision for future research. This study presents an in-depth examination of the application of XAI in Cybersecurity. It considers more than 300 papers to comprehensively analyze the main Cybersecurity application fields, such as Intrusion Detection Systems, Malware detection, Phishing and Spam detection, Botnet detection, Fraud detection, Zero-Day vulnerabilities, Digital Forensics, and Crypto-Jacking. Specifically, this study focuses on the explainability methods adopted or proposed in these fields, pointing out promising works and new challenges.