Data quality, and especially the assessment of data quality, has been intensively discussed in research and practice alike. To support an economically oriented management of data quality and decision-making under uncertainty, it is essential to assess the data quality level by means of well-founded metrics. However, if not adequately defined, such metrics can lead to wrong decisions and economic losses. Therefore, based on a decision-oriented framework, we present a set of five requirements for data quality metrics that aim to support an economically oriented management of data quality and decision-making under uncertainty. We demonstrate the applicability and efficacy of these requirements by evaluating five data quality metrics for different data quality dimensions, and we discuss practical implications of applying the presented requirements.
Cloud computing promises the flexible delivery of computing services in a pay-as-you-go manner. It allows customers to easily scale their infrastructure and save on the overall cost of operation. However, Cloud service offerings can only thrive if customers are satisfied with service performance. Allowing instantaneous access and flexible scaling while maintaining service levels and offering competitive prices poses a significant challenge to Cloud computing providers. Furthermore, services will remain available in the long run only if this business generates a stable revenue stream. To address these challenges, we introduce novel policy-based service admission control models that aim at maximizing the revenue of Cloud providers while taking informational uncertainty regarding resource requirements into account. Our evaluation shows that the policy-based approaches statistically significantly outperform first-come, first-served approaches, which are still the state of the art. Furthermore, the results give insights into how and to what extent uncertainty negatively impacts revenue.
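The contrast between first-come, first-served admission and a revenue-oriented admission policy under demand uncertainty can be sketched in a toy simulation. Everything below — the capacity, the demand and price distributions, and the yield-threshold policy — is invented for illustration and is not the paper's actual model.

```python
import random

random.seed(7)

CAPACITY = 100  # total resource units available (assumed)

def make_requests(n):
    """Generate service requests with uncertain resource demand:
    the provider only sees a noisy estimate of the true demand."""
    reqs = []
    for _ in range(n):
        true_demand = random.randint(5, 20)
        estimate = max(1, true_demand + random.randint(-5, 5))  # informational uncertainty
        price = true_demand * random.uniform(0.8, 2.0)          # revenue if accepted
        reqs.append((estimate, true_demand, price))
    return reqs

def fcfs(requests):
    """First-come, first-served: accept any request whose estimated
    demand still fits into the remaining capacity."""
    used, revenue = 0, 0.0
    for est, true, price in requests:
        if used + est <= CAPACITY:
            used += true  # actual consumption may differ from the estimate
            revenue += price
    return revenue

def policy(requests, min_yield=1.2):
    """Hypothetical admission policy: additionally require a minimum
    estimated revenue per resource unit (a yield threshold)."""
    used, revenue = 0, 0.0
    for est, true, price in requests:
        if price / est >= min_yield and used + est <= CAPACITY:
            used += true
            revenue += price
    return revenue

reqs = make_requests(50)
print(f"FCFS revenue:   {fcfs(reqs):.1f}")
print(f"Policy revenue: {policy(reqs):.1f}")
```

The simulation also shows where uncertainty hurts: admission decides on the noisy estimate, while the true demand is what actually consumes capacity.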
Annual reports published by companies contain important insights regarding their performance and are often analyzed in a manual, subjective manner. We address this point by combining the streams of research on text summarization and topic modelling with the one on sentiment analysis. Our approach consists of the steps of text summarization using BERTSUMEXT, topic modelling with LDA, sentiment analysis with FinBERT, and performance prediction with Decision Trees and Random Forest. The result provides decision makers with an interpretable and condensed representation of the content of annual reports, together with its relationship to future company performance. We evaluate our approach on 10-K reports, demonstrating both its interpretability for analysts and explanatory power regarding future company performance.
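The four-step pipeline can be sketched with stand-in components: leading-sentence extraction stands in for BERTSUMEXT, a toy word list stands in for FinBERT, and scikit-learn supplies the LDA topic model and Random Forest. All reports and labels below are invented; this only illustrates how the steps chain together.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier

reports = [
    "Revenue grew strongly. Margins improved. We expanded into new markets.",
    "Sales declined sharply. Losses widened. Restructuring costs hurt results.",
    "Profit was stable. Cash flow remained solid. Dividends were maintained.",
    "Demand weakened. Impairments rose. Guidance was withdrawn.",
]
outperformed = [1, 0, 1, 0]  # hypothetical future-performance labels

def summarize(text, k=2):
    """Extractive stand-in for BERTSUMEXT: keep the first k sentences."""
    return " ".join(text.split(". ")[:k])

POSITIVE = {"grew", "improved", "stable", "solid", "maintained"}
def sentiment(text):
    """Toy lexicon score standing in for FinBERT."""
    words = text.lower().replace(".", "").split()
    return sum(w in POSITIVE for w in words) / len(words)

# Step 1-2: summarize, then derive topic proportions from the summaries.
summaries = [summarize(r) for r in reports]
counts = CountVectorizer().fit_transform(summaries)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# Step 3-4: append a sentiment feature and fit the tree-based predictor.
features = np.column_stack([topics, [sentiment(s) for s in summaries]])
clf = RandomForestClassifier(random_state=0).fit(features, outperformed)
print(clf.predict(features))  # in-sample fit, for illustration only
```

Topic proportions and a sentiment score make the feature vector interpretable: an analyst can read off which topics and what tone drive a prediction.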
Stored information, used to support decision-making, can be outdated. Existing metrics for currency provide an indication of the correspondence between this stored information and its real-world counterpart. In the case of low currency, this information cannot be used effectively to support decision-making, although the decision-maker may still be able to learn from it. Our first objective is to develop an extended currency metric that provides an indication of the real-world information at the time of measurement, based on the stored information. Thus, the decision can be adjusted and the value of the stored information increased. As a second objective, we propose a quantitative approach for modelling the influence of currency on decision-making by extending the normative concept of the value of information. Finally, we demonstrate the relevance of our approach by applying it to two real-world scenarios from the field of sales management in customer relationship management.
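The idea can be illustrated with a simple probabilistic currency metric and a value-of-information-style payoff adjustment. The exponential shelf-life model and all numbers below are assumptions for illustration, not the metric developed in the paper.

```python
import math

def currency(age_years, decline_rate):
    """Probability that a stored attribute value still matches reality,
    assuming the real-world value changes at an average rate of
    `decline_rate` times per year (exponential shelf-life model)."""
    return math.exp(-decline_rate * age_years)

def expected_decision_value(age_years, decline_rate,
                            value_if_current, value_if_outdated):
    """Value-of-information-style adjustment: weight the decision payoff
    by the probability that the stored information is still current."""
    p = currency(age_years, decline_rate)
    return p * value_if_current + (1 - p) * value_if_outdated

# Example: a customer address stored 2 years ago; assume addresses
# change about 0.2 times per year on average.
p = currency(2.0, 0.2)
print(f"P(address still current) = {p:.3f}")  # exp(-0.4) ≈ 0.670
print(f"Expected campaign value  = "
      f"{expected_decision_value(2.0, 0.2, 10.0, -1.0):.2f}")
```

The expected value makes the trade-off explicit: the lower the currency, the more weight falls on the payoff of acting on outdated information, so the decision (e.g., whether to mail the campaign at all) can be adjusted accordingly.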
The concepts of duality and fuzzy uncertainty in linear programming have both been analyzed comprehensively in theory and applied practically in an abundance of cases. Consequently, their joint application is highly appealing for both scholars and practitioners. However, the literature contributions on duality in fuzzy linear programming (FLP) are neither complete nor consistent; for example, there are no consistent concepts of weak duality and strong duality. The contributions of this survey are (1) to provide the first comprehensive overview of literature results on duality in FLP, (2) to analyze these results in terms of research gaps in FLP duality theory, and (3) to show avenues for further research. We systematically analyze duality in fuzzy linear programming along potential fuzzifications of linear programs (fuzzy classes) and along fuzzy order operators. Our results show that research on FLP duality is fragmented along both dimensions; more specifically, duality approaches and related results vary in terms of homogeneity, completeness, consistency with crisp duality, and complexity. Fuzzy linear programming is still far away from a unifying theory as we know it from crisp linear programming. We suggest further research directions, including comprehensive duality theories for specific fuzzy classes that dispense with restrictive mathematical assumptions, consistent duality theories for specific fuzzy order operators, and a unifying fuzzy duality theory.
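The crisp baseline that fuzzy duality theories must remain consistent with can be verified numerically. The sketch below solves an invented primal LP and its dual with `scipy.optimize.linprog`: weak duality bounds one objective by the other, and strong duality makes the optima coincide.

```python
from scipy.optimize import linprog

# Primal (stated as a minimization for linprog): max 3*x1 + 5*x2
#   s.t.  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x >= 0
primal = linprog(c=[-3, -5],
                 A_ub=[[1, 0], [0, 2], [3, 2]],
                 b_ub=[4, 12, 18])

# Dual: min 4*y1 + 12*y2 + 18*y3
#   s.t.  y1 + 3*y3 >= 3,  2*y2 + 2*y3 >= 5,  y >= 0
# (>= constraints are negated to fit linprog's A_ub @ y <= b_ub form)
dual = linprog(c=[4, 12, 18],
               A_ub=[[-1, 0, -3], [0, -2, -2]],
               b_ub=[-3, -5])

print(f"primal optimum: {-primal.fun:.1f}")  # 36.0
print(f"dual optimum:   {dual.fun:.1f}")     # 36.0 -> strong duality holds
```

In FLP, it is exactly these properties — whether analogues of weak and strong duality hold, and for which fuzzy classes and order operators — that the surveyed approaches define inconsistently.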