2019
DOI: 10.1080/10447318.2019.1699748
Perceiving a Mind in a Chatbot: Effect of Mind Perception and Social Cues on Co-presence, Closeness, and Intention to Use

Cited by 108 publications (54 citation statements)
References 45 publications
“…In that case, they may lose face-to-face cues and personal interactions with physicians and find themselves in a more passive position for making health-related decisions. This finding is consistent with a study in the chatbot context (within the area of AI systems), which indicated that users have stronger feelings of copresence and closeness when the chatbot uses social cues [90]. In the context of robot care, a study showed that when robots are used in rehabilitation, they are viewed by patients as reducing human contact [91].…”
Section: Discussion (supporting)
Confidence: 91%
“…Therefore, they may lose face-to-face cues and personal interactions with physicians and find themselves in a more passive position for making health-related decisions. This finding is consistent with a study in the chatbot context (within the area of AI systems), which indicates that users have stronger feelings of co-presence and closeness when the chatbot uses social cues [77]. In the context of robot care, a study shows that when robots are used in rehabilitation, they are viewed by patients as reducing human contact [78].…”
Section: Discussion (supporting)
Confidence: 86%
“…For instance, in the context of autonomous vehicles, human characteristics refer to the capacity for rational thought and conscious feeling [21]. In the context of chatbots, Lee et al. [45] viewed human characteristics as users' mental states (e.g., intention and consciousness). In the context of social robots, humanlike characteristics refer to human appearance, which includes psychological (e.g., emotions, personalities, and gestures) and nonpsychological features (e.g., head, eyes, arms, and legs) [22].…”
Section: What Is Anthropomorphism? (mentioning)
Confidence: 99%
“…[26] "Humanlike features (e.g., humanlike appearance, emotions, personalities, and behaviors) of a product" [3,14,22] "The extent to which service robots simulate the characteristics, behaviors or appearances of humans" [27] An inference "Individuals' inferences that a chatbot's mental states are similar to those of a human" [45] "Inductive inference in which the perceiver attributes humanlike characteristics, motivations, intentions or underlying mental states to a non-human entity" [11] Other "The assignment of human traits and characteristics to conversation agents"…”
Section: Categorymentioning
confidence: 99%