Abstract-In this paper a new approach is presented to model interval-based data using Fuzzy Sets (FSs). Specifically, we show how both crisp and uncertain intervals (where there is uncertainty about the endpoints of the intervals) collected from individual or multiple survey participants over single or repeated surveys can be modelled using type-1, interval type-2, or general type-2 FSs based on zSlices. The proposed approach is designed to minimise any loss of information when transferring the interval-based data into FS models, and to avoid, as far as possible, assumptions about the distribution of the data. Furthermore, our approach does not rely on data pre-processing or outlier removal, which can lead to the elimination of important information. Different types of uncertainty contained within the data, namely intra- and inter-source uncertainty, are identified and modelled using the different degrees of freedom of type-2 FSs, thus providing a clear representation and separation of the individual types of uncertainty present in the data. We provide full details of the proposed approach, as well as a series of detailed examples based on both real-world and synthetic data. We perform comparisons with analogous techniques for deriving fuzzy sets from intervals, namely the Interval Approach (IA) and the Enhanced Interval Approach (EIA), and highlight the practical applicability of the proposed approach.
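The core idea of building a type-1 FS directly from collected intervals, with no distributional assumptions, can be sketched as follows. This is a minimal illustration, assuming membership at a point is the fraction of intervals that cover it (an agreement-style construction consistent with the abstract's description, not the paper's full method):

```python
# Sketch: type-1 membership at x = fraction of collected intervals covering x.
# No assumptions about the distribution of endpoints are made.
def t1_membership(intervals, x):
    """Membership of x in a type-1 fuzzy set built from crisp intervals."""
    if not intervals:
        return 0.0
    hits = sum(1 for lo, hi in intervals if lo <= x <= hi)
    return hits / len(intervals)

# Three survey responses as (lower, upper) interval endpoints.
intervals = [(2.0, 6.0), (3.0, 7.0), (4.0, 5.0)]
print(t1_membership(intervals, 4.5))  # covered by all three intervals -> 1.0
print(t1_membership(intervals, 2.5))  # covered by one interval -> 1/3
```

Because membership is derived purely from interval overlap counts, no information is discarded by pre-processing or outlier removal.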
Abstract-In this paper we describe a method of using interval-valued survey responses from multiple experts on multiple occasions to produce General Type-2 fuzzy sets. In the method we propose, both the intra- and inter-person variability are modelled, with no loss of information. The resulting sets are completely determined by the data, providing an accurate representation (in terms of being defined solely by the data) of the opinions being modelled. A description of the method is provided, along with synthetic and real-world numeric examples and a comparison to an alternative method proposed in [1].
Abstract-In this paper we explore the practical application of the previously introduced approach [1] to generate fuzzy sets from interval-valued data. We demonstrate two specific example applications where we 1) generate type-1 fuzzy sets from interval-valued survey data for both words (e.g., neutral, excellent) and concepts (e.g., ambience, food) and 2) generate zSlices-based general type-2 fuzzy sets from interval-valued data collected over multiple iterations of a survey. We highlight the need for the simultaneous rating of both concepts and words in order to maintain the context (including timeliness) of the resulting models. Further, in both example applications, we demonstrate, using the Jaccard similarity measure, how similarity measures can be employed both to relate and attribute word models to concept models (e.g., excellent food) and to compare different concepts directly for different contexts (e.g., ambience in venue A vs. ambience in venue B). We provide interpretations for the resulting word/concept models and similarity values and highlight their utility, for example, for the data-driven generation of linguistic descriptions of venues. Finally, we highlight remaining questions and challenges both in technical terms and in application terms.
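The Jaccard similarity mentioned above is, for fuzzy sets sampled over a common discretised domain, the ratio of the summed pointwise minima to the summed pointwise maxima of the two membership functions. A minimal sketch (the membership vectors below are illustrative stand-ins, not data from the paper):

```python
def jaccard(mu_a, mu_b):
    """Jaccard similarity of two fuzzy sets discretised on the same domain:
    sum of pointwise minima over sum of pointwise maxima of memberships."""
    num = sum(min(a, b) for a, b in zip(mu_a, mu_b))
    den = sum(max(a, b) for a, b in zip(mu_a, mu_b))
    return num / den if den else 0.0

word = [0.0, 0.5, 1.0, 0.5, 0.0]       # e.g., a word model such as "excellent"
concept = [0.0, 0.25, 0.75, 1.0, 0.5]  # e.g., a concept model such as "food"
print(jaccard(word, concept))          # -> 0.5
```

A similarity of 1.0 indicates identical sets, 0.0 disjoint support; intermediate values can be used to rank which word model best describes a given concept model.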
Abstract-In this paper, we describe the main functionality of an initial version of a new fuzzy logic software toolkit based on the R language. The toolkit supports the implementation of several types of fuzzy logic inference systems and we discuss and present several aspects of its capabilities to allow the straightforward implementation of type-1 and interval type-2 fuzzy systems. We include source code examples and visualizations both of type-1 and type-2 fuzzy sets as well as output surface visualizations generated using the R toolkit. Finally, we describe the significant benefits of relying on the R language as a language which is employed across several research disciplines (thus enabling access to fuzzy logic tools to a variety of researchers), outline future developments and most importantly call for contributions, comments and feedback to/on this open-source software development effort.
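To make concrete what a type-1 fuzzy inference step of the kind such a toolkit implements looks like, here is an illustrative sketch in plain Python (this is not the R toolkit's API; the rule, membership functions, and domains are invented for illustration):

```python
# Illustrative single-rule type-1 Mamdani inference with centroid
# defuzzification; all names and parameters here are hypothetical.
def trimf(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(x):
    # Rule: IF temp IS warm THEN fan IS fast (min-implication).
    fire = trimf(x, 15.0, 25.0, 35.0)            # antecedent firing strength
    ys = [i / 10.0 for i in range(101)]          # discretised output domain 0..10
    mu = [min(fire, trimf(y, 4.0, 8.0, 10.0)) for y in ys]  # clipped consequent
    s = sum(mu)
    return sum(y * m for y, m in zip(ys, mu)) / s if s else 0.0
```

An interval type-2 system would carry a pair of such membership functions (lower and upper) per set, which is the additional machinery the toolkit provides.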
(2016) Modelling cybersecurity experts' decision making processes using aggregation operators. Computers and Security. Available from the University of Nottingham repository: http://eprints.nottingham.ac.uk/35868/7/1-s2.0-S016740481630089X-main.pdf
An important role carried out by cyber-security experts is the assessment of proposed computer systems during their design stage. This task is fraught with difficulties and uncertainty, making the knowledge provided by human experts essential for successful assessment. Today, the increasing number of progressively complex systems has led to an urgent need for tools that support the expert-led process of system-security assessment. In this research, we use weighted averages (WAs) and ordered weighted averages (OWAs) with evolutionary algorithms (EAs) to create aggregation operators that model parts of the assessment process. We show how individual overall ratings for security components can be produced from ratings of their characteristics, and how these individual overall ratings can be aggregated to produce overall rankings of potential attacks on a system.
As well as identifying salient attacks and weak points in a prospective system, the proposed method also highlights which factors and security components contribute most to a component's difficulty and to an attack's ranking, respectively. A real-world scenario is used in which experts were asked to rank a set of technical attacks, and to answer a series of questions about the security components that are the subject of the attacks. The work shows how finding good aggregation operators, and identifying important components and factors of a cyber-security problem, can be automated. The resulting operators have the potential for use as decision aids for systems designers and cyber-security experts, increasing the amount of assessment that can be achieved with the limited resources available.
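The OWA operator used above differs from a plain weighted average in that its weights are applied to the inputs after sorting them in descending order; in the work described, the weights themselves are found by an evolutionary algorithm. A minimal sketch of the operator itself (the ratings and weights below are illustrative, not learned):

```python
def owa(values, weights):
    """Ordered weighted average: weights apply to values sorted descending,
    so weights attach to rank positions rather than to specific inputs."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

ratings = [7.0, 9.0, 5.0]              # hypothetical expert ratings
print(owa(ratings, [1/3, 1/3, 1/3]))   # equal weights -> plain mean (~7.0)
print(owa(ratings, [1.0, 0.0, 0.0]))   # all weight on the top rank -> max, 9.0
```

By shifting weight towards the top or bottom ranks, an OWA can interpolate between max-like (optimistic) and min-like (pessimistic) aggregation, which is what the evolutionary search exploits when fitting the operator to expert judgements.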