The literature provides a wide range of techniques to assess and improve the quality of data. Because of the diversity and complexity of these techniques, research has recently focused on defining methodologies that guide the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including their phases and steps, the strategies and techniques they adopt, the data quality dimensions and types of data they cover, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.
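As an illustration of what such methodologies assess, a data quality dimension like completeness is often operationalized as the fraction of required field values that are actually present. The following sketch is not taken from any surveyed methodology; the function name and the null-value convention are assumptions for the example:

```python
def completeness(records, fields):
    """Fraction of non-missing field values across all records,
    one common formulation of the completeness dimension."""
    total = len(records) * len(fields)
    if total == 0:
        return 1.0
    present = sum(
        1 for r in records for f in fields
        if r.get(f) not in (None, "")
    )
    return present / total

rows = [
    {"name": "Ada", "email": "ada@example.org"},
    {"name": "Bob", "email": None},
]
print(completeness(rows, ["name", "email"]))  # 3 of 4 values present -> 0.75
```

Other dimensions (accuracy, timeliness, consistency) admit analogous metrics, and the surveyed methodologies differ precisely in which metrics they prescribe and how the resulting scores drive improvement actions.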
Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description proposed in the literature. We consider a large spectrum of models and metamodels to describe service quality, ranging from ontological approaches that define quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities, as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey inspects the characteristics of the available approaches to reveal which are consolidated and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here have been selected through a systematic review of conference proceedings and journals spanning various research areas in computer science and engineering, including distributed, information, and telecommunication systems, networks and security, and service-oriented and grid computing.
The increasing diffusion of services enabled by Internet of Things (IoT) technologies raises several risks associated with security and data quality. Together with the high number of heterogeneous interconnected devices, this creates scalability issues, thereby calling for a flexible middleware platform able to deal with both security threats and data quality issues in a dynamic IoT environment. In this paper, a lightweight and cross-domain prototype of a distributed architecture for IoT is presented, providing minimum data caching functionality and in-memory data processing. A number of supporting algorithms for the assessment of data quality and security are presented and discussed. In the presented system, users can request services on the basis of a publish/subscribe mechanism, with data from IoT devices filtered according to users' requirements in terms of security and quality. The prototype is validated in an experimental setting characterized by the usage of real-time open data feeds presenting different levels of reliability, quality, and security.
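The filtering idea can be sketched as a toy publish/subscribe broker in which each subscriber declares minimum quality and security levels, and readings below either threshold are silently dropped. This is a minimal illustration, not the paper's architecture; all class and parameter names are invented for the example:

```python
from collections import defaultdict

class QualityAwareBroker:
    """Toy publish/subscribe broker: subscribers state minimum
    quality and security levels, and published readings that fall
    below either threshold are filtered out before delivery.
    Illustrative sketch only; names are assumptions."""

    def __init__(self):
        # topic -> list of (min_quality, min_security, callback)
        self._subs = defaultdict(list)

    def subscribe(self, topic, min_quality, min_security, callback):
        self._subs[topic].append((min_quality, min_security, callback))

    def publish(self, topic, reading, quality, security):
        for min_q, min_s, cb in self._subs[topic]:
            if quality >= min_q and security >= min_s:
                cb(reading)

broker = QualityAwareBroker()
received = []
broker.subscribe("temperature", min_quality=0.8, min_security=0.5,
                 callback=received.append)
broker.publish("temperature", 21.5, quality=0.9, security=0.7)  # delivered
broker.publish("temperature", 99.0, quality=0.4, security=0.9)  # filtered
print(received)  # [21.5]
```

In a real deployment the quality and security scores would themselves come from assessment algorithms run on the incoming device streams, which is where the paper's supporting algorithms fit in.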
Web mashups are a new generation of applications based on the "composition" of ready-to-use services. In different contexts, ranging from the consumer Web to Enterprise systems, the potential of this new technology is to make users evolve from passive receivers of applications to actors actively involved in the "creation of innovation". Enabling end users to self-define applications that satisfy their situational needs is emerging as an important new requirement. In this paper, we address the current lack of lightweight development processes and environments and discuss models, methods, and technologies that can make mashups a technology for end user development.
Crowdsourcing enables one to leverage the intelligence and wisdom of potentially large groups of individuals toward solving problems. Common problems approached with crowdsourcing are labeling images, translating or transcribing text, providing opinions or ideas, and the like: all tasks that computers are not good at or where they may even fail altogether. The introduction of humans into computations and/or everyday work, however, also poses critical, novel challenges in terms of quality control, as the crowd is typically composed of people with unknown and very diverse abilities, skills, interests, personal objectives, and technological resources. This survey studies quality in the context of crowdsourcing along several dimensions, so as to define and characterize it and to understand the current state of the art. Specifically, this survey derives a quality model for crowdsourcing tasks, identifies the methods and techniques that can be used to assess the attributes of the model, and the actions and strategies that help prevent and mitigate quality problems. An analysis of how these features are supported by the state of the art further identifies open issues and informs an outlook on promising future research directions.
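A baseline among the quality-assurance strategies such surveys cover is redundancy plus aggregation: ask several workers the same question and take the majority answer. The sketch below is a generic illustration of that strategy, not a technique attributed to the survey:

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate redundant crowd labels per item by majority vote,
    a baseline quality-assurance strategy for crowdsourced labeling.
    Ties are broken by first occurrence."""
    return {item: Counter(votes).most_common(1)[0][0]
            for item, votes in labels.items()}

crowd = {
    "img1": ["cat", "cat", "dog"],
    "img2": ["dog", "dog", "dog"],
}
print(majority_vote(crowd))  # {'img1': 'cat', 'img2': 'dog'}
```

More sophisticated schemes weight each vote by an estimated worker accuracy (e.g., from gold questions or EM-style inference), which is exactly the kind of assessment method a crowdsourcing quality model has to account for.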
Modern Web 2.0 applications are characterized by high user involvement: users receive support for creating content and annotations as well as for "composing" applications using content and functions from third parties. This latter phenomenon is known as Web mashups and is gaining popularity even with users who have limited programming skills, raising a set of peculiar information quality issues. Assessing a mashup's quality, especially the quality of the information it provides, requires understanding how the mashup has been developed, what its components look like, and how quality propagates from basic components to the final mashup application.
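Quality propagation from components to composition can be made concrete with simple aggregation rules, for instance a pessimistic "weakest link" rule or a multiplicative decay. This is a generic sketch under assumed rules, not the propagation model of the paper:

```python
def propagate_quality(component_scores, mode="min"):
    """Illustrative propagation of a quality score (in [0, 1]) from
    mashup components to the composed application. 'min' models a
    weakest-link composition; 'product' models cumulative degradation.
    Real propagation rules depend on the actual composition logic."""
    if mode == "min":
        return min(component_scores)
    if mode == "product":
        out = 1.0
        for s in component_scores:
            out *= s
        return out
    raise ValueError(f"unknown mode: {mode}")

scores = [0.9, 0.8, 0.95]  # e.g., three data sources feeding a mashup
print(propagate_quality(scores))                             # 0.8
print(round(propagate_quality(scores, "product"), 3))        # 0.684
```

The choice of rule matters: under "min" a single low-quality feed caps the whole mashup, while "product" penalizes every additional component, two quite different assumptions about how a composition uses its inputs.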