Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3544549.3585886
A User Interface for Sense-making of the Reasoning Process while Interacting with Robots

Cited by 3 publications (3 citation statements)
References 20 publications
“…Existing interfaces for HRI systems primarily focus on situation awareness and user control [73,76]. In these contexts, the interfaces focus on how to assist the human-robot interaction and data analysis process and fail to consider how to improve the human experience in the labeling process and the data quality.…”
Section: Visualization in Human-Robot Interaction
confidence: 99%
“…Several works have investigated the design of transparent interfaces in HRI. Wang et al (2023) developed an interface that visualizes the decision-making process of a robotic system and allows for inspection and editing of the knowledge graph. They argue that such an interface can support the sensemaking of robot decision-making.…”
Section: Related Work
confidence: 99%
“…Visualization options that were considered include an abstracted visualization with icons and text, a knowledge graph representation that closely matches the robot’s internal representation (as in, e.g., Wang et al, 2023 ), or a camera stream with a visual overlay (e.g., Perlmutter et al, 2016 ). In our task scenario, a camera stream representation would require either a static composition of multiple images from the video stream (which would result in a cluttered view that does not fit on the tablet in a way that individual objects can be distinguished); or it would need to change dynamically (which we expected would make it more difficult to keep track of the knowledge base).…”
Section: Designing a System That Conveys Detected Objects to a User
confidence: 99%