Abstract: The rapid development of health social networks (HSNs) in recent years has led to fast growth of user-generated content. People increasingly interact and share health information in social networks. However, the trustworthiness of that shared information is a major concern. Viral rumor spreading, which can occur very rapidly on social media, can harm health consumers. In this paper, we survey existing solutions that patients can use to gauge the trustworthiness…
“…There are several applications of this framework, such as stock market prediction with Twitter data [37]. Other examples include trust management in social networks [54], cloud computing [34,35], internet of things [36,38], healthcare [8,9], emergency communications [12], and detection of crime [24] and fake users [22].…”
We propose, for the first time, a trustworthy acceptance metric and its measurement methodology to evaluate the trustworthiness of AI-based systems used in decision making for Food-Energy-Water (FEW) management. The proposed metric is a significant step forward in the standardization of AI systems: standardizing the trustworthiness of AI systems is essential, but until now standardization efforts have remained at the level of high-level principles. The measurement methodology of the proposed metric includes human experts in the loop and is based on our trust management system. Our metric captures and quantifies the system's transparent…
“…Some of the applications of this framework include stock market analysis using Twitter data, trust management of the Internet of Things, fake user detection, and crime prediction [16,27,23,28,29,30,11,40,41,8,39,7,25,24,6,17].…”
We propose a hybrid human-machine decision-making system to manage Food-Energy-Water resources. In our system, trust among human actors is measured and managed during decision making. This trust is then used to pressure human actors to choose among algorithm-generated solutions that satisfy the community's preferred trade-offs among multiple objectives. We model the trust-based feedback loops in decision making using control theory, where the feedback signal is the trust pressure an actor receives from peers. Using control theory, we studied the dynamics of an actor's trust and then modeled the change in solution distances. In both scenarios, we also calculated the settling times and stability from the transfer functions and their Z-transforms, expressed as a number of rounds, to show whether and when the decision making is finalized.
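The abstract does not give the update law, but the idea of a discrete-time trust loop with a settling time measured in rounds can be sketched as follows. This is a minimal illustration under assumed dynamics: a first-order update in which an actor's trust moves a fraction `alpha` toward the community consensus each round (the names `alpha`, `simulate_trust`, and `settling_time` are illustrative, not taken from the paper).

```python
def simulate_trust(t0, target, alpha, rounds):
    """Simulate one actor's trust under peer trust pressure.

    Assumed update rule: t[n+1] = t[n] + alpha * (target - t[n]),
    i.e. each round the actor closes a fraction alpha of the gap to the
    community consensus -- a stable first-order loop for 0 < alpha < 2.
    """
    t = t0
    history = [t]
    for _ in range(rounds):
        t = t + alpha * (target - t)
        history.append(t)
    return history

def settling_time(history, target, tol=0.02):
    """First round index after which trust stays within tol*|target| of target.

    Returns None if the trajectory never settles within the tolerance band.
    """
    band = tol * abs(target) if target else tol
    for n in range(len(history)):
        if all(abs(x - target) <= band for x in history[n:]):
            return n
    return None

history = simulate_trust(t0=0.0, target=1.0, alpha=0.5, rounds=20)
```

With `alpha = 0.5` the error halves every round, so the trajectory enters the 2% band after 6 rounds; the Z-transform analysis in the paper plays the analogous role of deriving such settling times and stability conditions analytically rather than by simulation.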