2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2016.7759647

Iterative path optimisation for personalised dressing assistance using vision and force information

Abstract: We propose an online iterative path optimisation method to enable a Baxter humanoid robot to assist human users to dress. The robot searches for the optimal personalised dressing path using vision and force sensor information: vision information is used to recognise the human pose and model the movement space of upper-body joints; force sensor information is used for the robot to detect external force resistance and to locally adjust its motion. We propose a new stochastic path optimisation method bas…
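
The abstract describes a loop in which a vision-derived estimate of the user's upper-body joint movement space seeds a dressing path that is then refined from force-sensor feedback. The Python sketch below is a minimal illustration of that kind of sampling-based path refinement, not the paper's algorithm: the simulated wrench model, cost terms, and all function and variable names are assumptions made so the example runs end to end.

```python
# Minimal sketch (not the authors' implementation): candidate dressing paths are
# perturbed and scored by the force resistance they would induce; the lowest-cost
# candidate is kept each iteration. `simulated_wrench` stands in for the robot's
# force/torque sensor and is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
COMFORT_PATH = np.linspace([0.3, 0.0, 0.9], [0.6, 0.2, 1.1], 10)  # assumed "ideal" path

def simulated_wrench(waypoint, ideal):
    """Stand-in for a force/torque reading: resistance grows with deviation."""
    deviation = waypoint - ideal
    return np.concatenate([20.0 * deviation, np.zeros(3)])  # [Fx, Fy, Fz, Tx, Ty, Tz]

def path_cost(path, force_limit=8.0):
    """Accumulate force magnitude along the path; penalise exceeding the limit."""
    cost = 0.0
    for waypoint, ideal in zip(path, COMFORT_PATH):
        f = np.linalg.norm(simulated_wrench(waypoint, ideal)[:3])
        cost += f + (100.0 if f > force_limit else 0.0)
    return cost

def optimise_path(initial_path, n_iters=50, n_samples=16, sigma=0.05):
    """Keep the lowest-cost perturbed candidate each iteration (random search)."""
    best, best_cost = initial_path, path_cost(initial_path)
    for _ in range(n_iters):
        candidates = [best + rng.normal(0.0, sigma, best.shape) for _ in range(n_samples)]
        costs = [path_cost(c) for c in candidates]
        i = int(np.argmin(costs))
        if costs[i] < best_cost:
            best, best_cost = candidates[i], costs[i]
        sigma *= 0.97  # gradually narrow the search around the current best path
    return best, best_cost

initial = COMFORT_PATH + rng.normal(0.0, 0.1, COMFORT_PATH.shape)  # vision-derived guess
optimised, cost = optimise_path(initial)
print(f"initial cost: {path_cost(initial):.1f}, optimised cost: {cost:.1f}")
```

In the real system the cost would come from force/torque measurements taken while the robot executes each candidate, and the candidates would be constrained to the movement space modelled from the user's pose, rather than being scored against a simulated wrench.
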

Cited by 74 publications (106 citation statements) | References 18 publications
“…To compensate for the disadvantages of using vision information only, force information has been successfully used in [1] to infer whether the user's forearm can successfully enter the sleeve. In [9] and our previous research [7], the force resistance between the robot gripper and the body has been used to adjust the robot motion with a pure position controller. Although these methods can update the robot motion based on the detected force, this is not achieved in real time, and the delay induced by the robot position update may leave the robot in an uncomfortable or even dangerous state for the user.…”
Section: Related Work (mentioning)
Confidence: 99%
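
The excerpt above describes adjusting a position-controlled motion whenever the detected force resistance exceeds a comfortable level, and identifies the position-update delay as the weak point. The fragment below is a hypothetical sketch of that control-cycle logic with made-up sensor/controller calls and thresholds; it is not the implementation from [7] or [9].

```python
# Hypothetical force-thresholded adjustment for a pure position controller:
# if the measured resistance exceeds a limit, the next Cartesian setpoint is
# displaced along the external force direction (a compliant retreat). Because
# the correction is applied only once per position update, any delay in this
# loop leaves the old setpoint active while the resistance keeps building.
import numpy as np

FORCE_LIMIT = 6.0     # N, assumed comfort threshold
BACKOFF_GAIN = 0.002  # m per N, assumed size of the local correction

def adjust_setpoint(setpoint, wrench):
    """Return a corrected Cartesian setpoint from the latest wrench reading."""
    force = np.asarray(wrench[:3], dtype=float)
    if np.linalg.norm(force) > FORCE_LIMIT:
        # Move with the external force to reduce the resistance on the user.
        setpoint = setpoint + BACKOFF_GAIN * force
    return setpoint

# One control cycle (hypothetical robot interface):
#   wrench = read_force_sensor()
#   target = adjust_setpoint(target, wrench)
#   command_cartesian_position(target)
```
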
“…A common approach to solve this problem is to recognize the postures of the user in real time and to adjust the robot's trajectory accordingly [2][3][4][5][6]. However, severe occlusions occur when the robot arms, the clothes and the human body are in close contact [7], which lead to real-time human pose recognition failures during the dressing process. For instance, depth cameras, which are often used for human pose recognition, typically fail to extract the user's skeleton because of the occlusions.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Recent studies included additional modalities such as haptics to improve the interaction with the user [18], [19], [20], [21], [22]. The evaluation of such systems focused on robot performance without considering the direct user input for robot personalization, hence limiting the scalability of such systems in applications with people.…”
Section: Relevant Work (mentioning)
Confidence: 99%
“…Gao et al (2016) have used RGB-D data to estimate user pose and initial robot trajectory, which was then updated based on force feedback information for dressing a sleeveless jacket using the Baxter robot. Another group have made predictive models for dressing a hospital gown based on just the force modality, removing the vision input altogether, in a simple single arm experiment (Kapusta et al, 2016).…”
Section: Related Work (mentioning)
Confidence: 99%