Conversation is a fundamental human experience, necessary to pursue intrapersonal and interpersonal goals across myriad contexts, relationships, and modes of communication. In the current research, we isolate the role of an understudied conversational behavior: question-asking. Across 3 studies of live dyadic conversations, we identify a robust and consistent relationship between question-asking and liking: people who ask more questions, particularly follow-up questions, are better liked by their conversation partners. When people are instructed to ask more questions, they are perceived as higher in responsiveness, an interpersonal construct that captures listening, understanding, validation, and care. We measure responsiveness with an attitudinal measure from previous research as well as a novel behavioral measure: the number of follow-up questions one asks. In both cases, responsiveness explains the effect of question-asking on liking. In addition to analyzing live get-to-know-you conversations online, we also studied face-to-face speed-dating conversations. We trained a natural language processing algorithm as a "follow-up question detector" and applied it to our speed-dating data (it can be applied to any text data to more deeply understand question-asking dynamics). The follow-up question rate established by the algorithm showed that speed daters who ask more follow-up questions during their dates are more likely to elicit agreement for second dates from their partners, a behavioral indicator of liking. We also find that, despite the persistent and beneficial effects of asking questions, people do not anticipate that question-asking increases interpersonal liking.
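The abstract above does not describe how its "follow-up question detector" was built; the sketch below is only a hand-rolled, heuristic stand-in (all names, stopwords, and rules are our own assumptions, not the authors' trained model). It approximates a follow-up question as a question that reuses content words from the partner's preceding turn, and computes a per-speaker follow-up question rate of the kind the abstract mentions.

```python
# Toy "follow-up question detector" (illustrative heuristic, NOT the
# paper's trained NLP model): a follow-up question is approximated as a
# question that shares content words with the partner's prior turn.

STOPWORDS = {"i", "a", "an", "the", "to", "and", "of", "in", "it",
             "is", "was", "you", "do", "did", "were", "so"}

def _content_words(text: str) -> set[str]:
    """Lowercased tokens with surrounding punctuation and stopwords removed."""
    return {w.strip(".,!?'\"") for w in text.lower().split()} - STOPWORDS

def is_followup_question(question: str, prior_turn: str) -> bool:
    """Crudely guess whether `question` follows up on `prior_turn`."""
    if not question.rstrip().endswith("?"):
        return False  # not a question at all
    return bool(_content_words(question) & _content_words(prior_turn))

def followup_rate(turns: list[tuple[str, str]]) -> float:
    """Share of a speaker's questions judged to be follow-ups.

    `turns` pairs each partner statement with the speaker's next utterance.
    """
    questions = [(p, q) for p, q in turns if q.rstrip().endswith("?")]
    if not questions:
        return 0.0
    return sum(is_followup_question(q, p) for p, q in questions) / len(questions)
```

A real detector would be a supervised classifier trained on labeled conversation turns; this lexical-overlap rule merely illustrates the quantity being measured, e.g. it flags "How was hiking in Peru?" after "I went hiking in Peru." but not "Do you like pizza?" after "I work in finance."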
The “veil of ignorance” is a moral reasoning device designed to promote impartial decision making by denying decision makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here, we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across 7 experiments (n = 6,261), 4 preregistered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by anchoring, probabilistic reasoning, or generic perspective taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision makers who wish to make more impartial and/or socially beneficial choices.
Humans have a remarkable capacity for flexible decision-making, deliberating among actions by modeling their likely outcomes. This capacity allows us to adapt to the specific features of diverse circumstances. In real-world decision-making, however, people face an important challenge: There are often an enormous number of possibilities to choose among, far too many for exhaustive consideration. There is a crucial, understudied prechoice step in which, among myriad possibilities, a few good candidates come quickly to mind. How do people accomplish this? We show across nine experiments (N = 3,972 U.S. residents) that people use computationally frugal cached value estimates to propose a few candidate actions on the basis of their success in past contexts (even when irrelevant for the current context). Deliberative planning is then deployed just within this set, allowing people to compute more accurate values on the basis of context-specific criteria. This hybrid architecture illuminates how typically valuable thoughts come quickly to mind during decision-making.
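The hybrid consider-then-choose architecture this abstract describes can be sketched in a few lines. The function below is our own illustrative rendering (names, values, and the shortlist size `k` are assumptions, not the paper's model): cheap cached values, reflecting past success, propose a small consideration set; a costlier context-specific evaluation then deliberates only within that set.

```python
# Sketch of a hybrid "generate with cached values, evaluate with planning"
# choice process (illustrative only, not the authors' computational model).

def hybrid_choice(options, cached_value, context_value, k=3):
    """Return the chosen option.

    cached_value(o):  cheap, context-general estimate (success in past contexts).
    context_value(o): expensive, accurate value in the current context.
    k:                size of the consideration set.
    """
    # Step 1 (generation): cached values shortlist k candidates, even
    # though those estimates ignore the current context.
    consideration_set = sorted(options, key=cached_value, reverse=True)[:k]
    # Step 2 (evaluation): deliberate only within the shortlist, using
    # the accurate context-specific values.
    return max(consideration_set, key=context_value)
```

Note the tradeoff the abstract highlights: if the option that is best in the current context has a low cached value, it never enters the consideration set, so the hybrid process can miss it while still avoiding exhaustive evaluation of every option.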
Recent concern about the harms of information technologies motivates consideration of regulatory action to forestall or constrain certain developments in the field of artificial intelligence (AI). However, definitional ambiguity hampers the possibility of conversation about this urgent topic of public concern. Legal and regulatory interventions require agreed-upon definitions, but consensus around a definition of AI has been elusive, especially in policy conversations. With an eye towards practical working definitions and a broader understanding of positions on these issues, we survey experts and review published policy documents to examine researcher and policy-maker conceptions of AI. We find that while AI researchers favor definitions of AI that emphasize technical functionality, policy-makers instead use definitions that compare systems to human thinking and behavior. We point out that definitions adhering closely to the functionality of AI systems are more inclusive of technologies in use today, whereas definitions that emphasize human-like capabilities are most applicable to hypothetical future technologies. As a result of this gap, ethical and regulatory efforts may overemphasize concern about future technologies at the expense of pressing issues with existing deployed technologies.