Abstract: Solving a non-trivial problem can rarely be reduced to performing a single, simple task. Within any complex knowledge-based system, there exist a number of tasks to be performed to solve the problem. Making these tasks explicit has been a recurrent concern over the past few years. This has led to functional architectures for knowledge-based systems. The purpose of this paper is to assess the use of functional architectures for knowledge-based systems. The discussion will be based on the experience gained while d…
With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today's increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content, and presentation. Moreover, we identify several challenges that remain unaddressed so far, for example those related to fine-grained issues in the presentation of explanations and to how explanation facilities are evaluated.
Ford Aerospace Corporation has been investigating the use of intelligent systems for space mission support since the early 1980s. Our research is motivated by the concept of independent, yet cooperating, intelligent systems operating in the survivable mobile ground stations of the future. Each intelligent system (IS) functions independently for localized situations and cooperates with other ISs to address situations of global system influence. This paper presents our research approach for implementing cooperating intelligent systems in a space systems environment. A satellite power management scenario is used to illustrate the approach.