The concept of Artificial Intelligence has gained a lot of attention over the last decade. In particular, AI-based tools have been employed in several scenarios and are, by now, pervading our everyday life. Nonetheless, most of these systems lack many capabilities that we would naturally consider part of a notion of "intelligence". In this work, we present an architecture that, inspired by the cognitive theory popularized by D. Kahneman in Thinking, Fast and Slow, is tasked with solving planning problems in two different settings: classical and multi-agent epistemic. The proposed system is an instance of a more general AI paradigm, referred to as SOFAI (Slow and Fast AI). SOFAI exploits multiple solving approaches, with different capabilities that characterize them as either fast or slow, and a metacognitive module to regulate them. This combination of components, which roughly reflects the human reasoning process as described by Kahneman, allowed us to enhance the reasoning process that, in this case, is concerned with planning in the two settings above. The behavior of this system is then compared to state-of-the-art solvers, showing that the newly introduced system achieves better generality, solving a wider set of problems with an acceptable trade-off between solving time and solution accuracy.
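The fast/slow arbitration described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the control loop, not the paper's actual implementation: the solver names, the confidence score, and the threshold rule are all assumptions made for the example.

```python
def sofai_solve(problem, fast_solver, slow_solver, confidence_threshold=0.8):
    """Sketch of a SOFAI-style control loop: try the fast solver first;
    a metacognitive check accepts its answer only when confidence is
    high enough, otherwise the slow, deliberate solver takes over."""
    solution, confidence = fast_solver(problem)
    # Metacognitive gate: is the fast answer trustworthy for this instance?
    if solution is not None and confidence >= confidence_threshold:
        return solution, "fast"
    return slow_solver(problem), "slow"

# Toy demonstration: the fast solver only handles short problems confidently.
def fast(problem):
    return (sorted(problem), 0.9) if len(problem) <= 3 else (None, 0.0)

def slow(problem):
    return sorted(problem)

print(sofai_solve([3, 1, 2], fast, slow))        # fast path
print(sofai_solve([5, 4, 3, 2, 1], fast, slow))  # falls back to slow
```

In a real instantiation the "confidence" signal would come from a learned model of the fast solver's past performance, and the metacognitive module could also weigh solving-time budgets.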
Why do people share or publicly engage with fake stories? Two possible answers come to mind: (a) people are deeply irrational and believe these stories to be true; or (b) they intend to deceive their audience. Both answers presuppose the idea that people put the stories forward as true. But I argue that in some cases, these outlandish (yet also very popular) stories function as signals of one's group membership. This signaling function can make better sense of why, despite their unusual nature or lack of a factual basis, some of these stories are so widespread.
In this paper, I will focus on a type of confabulation that emerges in relation to questions about mental attitudes (e.g. belief, emotion, decision) whose causes we cannot introspectively access. I argue against two popular views that see confabulations as mainly offering a psychological story about ourselves. On these views, confabulations are the result of either a cause-tracking mechanism or a self-directed mindreading mechanism. In contrast, I propose the view that confabulations are mostly telling a normative story: they are arguments primarily offered to justify one's attitudes, and they are produced by our argumentative reasoning mechanism driven by the biological goal of presenting ourselves as good reasoners and as reliable sources of information.
Nudging is a behavioral strategy aimed at influencing people's thoughts and actions. Nudging techniques can be found in many situations in our daily lives, and they can be targeted either at humans' fast and unconscious thinking (e.g., by using images to generate fear) or at their more careful and effortful slow thinking (e.g., by releasing information that makes us reflect on our choices). In this paper, we propose and discuss a value-based AI-human collaborative framework where AI systems nudge humans by proposing decision recommendations. Three different nudging modalities, based on when recommendations are presented to the human, are intended to stimulate human fast thinking, slow thinking, or meta-cognition. Values that are relevant to a specific decision scenario are used to decide when and how to use each of these nudging modalities. Examples of such values are decision quality, speed, human upskilling and learning, human agency, and privacy. Several values can be present at the same time, and their priorities can vary over time. The framework treats values as parameters to be instantiated in a specific decision environment.
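Since the framework treats values as parameters, the choice of nudging modality can be viewed as a function from value priorities to a presentation time. The sketch below is purely illustrative: the value names, weights, and decision rules are assumptions for the example, not the policy proposed in the paper.

```python
def choose_nudging_modality(values):
    """Hypothetical mapping from value priorities (weights in [0, 1])
    to one of three nudging modalities:
      - before_decision: recommendation shown first (fast thinking)
      - after_decision:  shown after the human decides (slow thinking)
      - on_demand:       shown only when asked (meta-cognition)"""
    if values.get("speed", 0) > values.get("human_upskilling", 0):
        return "before_decision"  # fast thinking: adopt the recommendation
    if values.get("human_agency", 0) >= values.get("decision_quality", 0):
        return "on_demand"        # meta-cognition: human requests help
    return "after_decision"      # slow thinking: compare and reflect

print(choose_nudging_modality({"speed": 0.9, "human_upskilling": 0.2}))
```

Because value priorities can vary over time, such a function would be re-evaluated per decision, letting the same framework shift modality as the environment changes.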
Many of our beliefs behave irrationally: this is hardly news to anyone. Although beliefs' irrational tendencies need to be taken into account, this paper argues that beliefs necessarily preserve at least a minimal level of rationality. This view offers a plausible picture of what makes belief unique and will help us set beliefs apart from other cognitive attitudes (e.g., imagination, acceptance).

In philosophy and cognitive science, mental attitude types (e.g., imagination, belief, and desire) are often defined in terms of their input and output (Fodor, 1985; Nichols and Stich, 2003). When applied to belief, it has long been argued that, on the output side, belief is inferentially promiscuous and action-guiding. On the input side, evidence is usually what motivates one to form and revise one's beliefs. This means that for an attitude to count as a belief, it must be prone to react to evidence establishing the truth of its content, as well as to any counter-evidence disproving it. These core tendencies are also usually linked to belief's standards of doxastic rationality. Piggybacking on this mainstream view, which I call Traditionalism, many philosophers maintain that beliefs are in fact expected to behave rationally. This is Strong Traditionalism. Contrary to the strong version of Traditionalism, however, a group of Revisionist philosophers have recently argued that it is not necessary for a belief to have a significant and widespread impact on our theoretical and practical reasoning, and/or to appropriately respond to the relevant evidence.
On a popular version:

Strong Traditionalism: mental attitude A is a belief only if A is mostly doxastically rational, in the sense that it is (i) sensitive to the relevant evidence, (ii) inferentially integrated with other beliefs and intentional attitudes, and (iii) causes actions (when coupled with the appropriate conative attitude, in the right circumstances).

The term 'rational' is used for any belief conforming to standards of rationality, whereas 'irrational' beliefs subvert those standards. Roughly, a belief is epistemically irrational if held in the face of insufficient, available evidence. Also, a belief is procedurally irrational if it does not produce inferential effects to preserve coherence with other attitudes. Finally, a belief is practically irrational if it does not produce behavioral/emotional effects, and does not enjoy the role of guiding our practices of practical reasoning.
Epistemic Planning (EP) refers to an automated planning setting where the agent reasons in the space of knowledge states and tries to find a plan to reach a desirable state from the current one. Its general form, the Multi-agent Epistemic Planning (MEP) problem, involves multiple agents who need to reason about both the state of the world and the information flow between agents. For the MEP problem, multiple approaches have been developed recently with varying restrictions, such as considering only the concept of knowledge while not allowing the idea of belief, or not allowing "complex" modal operators such as those needed to handle dynamic common knowledge. While the diversity of approaches has led to a deeper understanding of the problem space, the lack of a standardized way to specify MEP problems independently of solution approaches has created difficulties in comparing the performance of planners, identifying promising techniques, exploring new strategies like ensemble methods, and making it easy for new researchers to contribute to this research area. To address the situation, we propose a unified way of specifying EP problems: the Epistemic Planning Domain Definition Language, E-PDDL. We show that E-PDDL can be supported by leading MEP planners and provide corresponding parser code that translates EP problems specified in E-PDDL into (M)EP problems that can be handled by several planners. This work is also useful in building more general epistemic planning environments, where we envision a meta-cognitive module that takes a planning problem in E-PDDL, identifies and assesses some of its features, and autonomously decides which planner is the best one to solve it.

Motivation

Multi-agent scenarios are ubiquitous in everyday life. We often need to make decisions based on information on the problem to be solved and on our knowledge, or belief, about the preferences and actions of other agents involved in, or impacted by, the decision.
Making decisions in these scenarios is therefore an epistemic reasoning task. Often such decisions involve creating a plan, that is, a sequence of actions that, when executed, will lead to a desired state.
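The envisioned meta-cognitive module, which inspects features of an E-PDDL problem and routes it to a suitable planner, could be sketched as a simple feature-based dispatcher. The planner names, feature flags, and routing rules below are hypothetical placeholders, not the actual planners or criteria used in the paper.

```python
def select_planner(problem_features):
    """Illustrative meta-cognitive dispatch: route an (M)EP problem,
    described by a dict of extracted features, to a planner class.
    All names here are assumed for the sake of the example."""
    if problem_features.get("num_agents", 1) == 1:
        # Single-agent problems need no multi-agent epistemic machinery.
        return "classical_planner"
    if problem_features.get("uses_belief") or \
       problem_features.get("dynamic_common_knowledge"):
        # Belief (not just knowledge) or dynamic common knowledge
        # requires a planner supporting the full epistemic language.
        return "general_mep_planner"
    # Otherwise a faster, knowledge-only fragment suffices.
    return "knowledge_only_mep_planner"

print(select_planner({"num_agents": 3, "uses_belief": True}))
```

A real module would extract these features automatically from the parsed E-PDDL specification and could also learn the routing policy from past planner performance.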