2019
DOI: 10.1007/978-3-030-32239-7_27

Seeing Under the Cover: A Physics Guided Learning Approach for In-bed Pose Estimation

Abstract: Human in-bed pose estimation has huge practical value in medical and healthcare applications, yet it still mainly relies on expensive pressure mapping (PM) solutions. In this paper, we introduce our novel physics-inspired vision-based approach that addresses the challenging issues associated with the in-bed pose estimation problem, including monitoring a fully covered person in complete darkness. We reformulated this problem using our proposed Under the Cover Imaging via Thermal Diffusion (UCITD) method to capture …

Cited by 39 publications (48 citation statements)
References 13 publications
“…Scan range can be identified by detecting anatomical joints of the subject from the images. Much recent work [27]-[29] has focused on estimating the 2D [30]-[35] or 3D keypoint locations [28], [36]-[39] on the patient body. These keypoint locations usually include major joints such as the neck, shoulders, elbows, ankles, wrists, and knees.…”
Section: B. AI-Empowered Imaging Workflow
confidence: 99%
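The statement above describes detecting major body joints and then deriving a scan range from them. Below is a minimal sketch of that idea, not code from the cited papers: the joint names, the (x, y, confidence) keypoint format, the confidence threshold, and the margin are all illustrative assumptions.

```python
# Hypothetical sketch: turn detected 2D joint keypoints into a scan-range box.
from typing import Dict, Tuple

# Illustrative set of major joints, as listed in the citation statement.
JOINTS = [
    "neck", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow",
    "l_wrist", "r_wrist", "l_knee", "r_knee", "l_ankle", "r_ankle",
]

def scan_range_from_keypoints(
    keypoints: Dict[str, Tuple[float, float, float]],  # name -> (x, y, confidence)
    min_conf: float = 0.5,    # assumed detection-confidence threshold
    margin: float = 20.0,     # assumed safety margin in pixels
) -> Tuple[float, float, float, float]:
    """Return an (x_min, y_min, x_max, y_max) box around confidently detected joints."""
    pts = [(x, y) for name, (x, y, c) in keypoints.items()
           if name in JOINTS and c >= min_conf]
    if not pts:
        raise ValueError("No confident joint detections to define a scan range.")
    xs, ys = zip(*pts)
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)
```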
“…As a result of this focus, the areas of patient behaviour assessment have received less attention. While several in-clinic systems using CNN- and RNN-based models have been introduced to enable comprehensive data analysis through accurate and granular quantification of a patient’s movements [231, 232, 233], these methods are not yet sufficiently accurate for widespread clinical use, yet we argue that graph neural networks have a great potential in these application areas.…”
Section: Research Challenges and Future Directions
confidence: 99%
“…For all RGB and thermal experiments, we use the SLP [9] dataset, which comes with images of people lying on a bed and covered by a cloth under varying cover conditions: no cover (uncover), "light" cover (referred to as cover1), and "heavy" cover (cover2). For all RGB and depth experiments, we use the publicly available CAD [21] and PKU [22] datasets, along with a proprietary medical scan patient setup (SCAN) dataset.…”
Section: Implementation Details
confidence: 99%
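The quote above names the three SLP cover conditions (uncover, cover1, cover2) and several datasets. The sketch below only illustrates grouping images by cover condition; the <root>/<subject>/<modality>/<cover> directory layout and the PNG extension are hypothetical assumptions, not the actual SLP release structure.

```python
# Hypothetical sketch: collect image paths per cover condition for one modality.
from pathlib import Path
from typing import Dict, List

COVER_CONDITIONS = ["uncover", "cover1", "cover2"]  # no / "light" / "heavy" cover

def list_images_by_cover(root: str, modality: str = "RGB") -> Dict[str, List[Path]]:
    """Group image paths by cover condition under an assumed
    <root>/<subject>/<modality>/<cover>/ layout."""
    grouped: Dict[str, List[Path]] = {c: [] for c in COVER_CONDITIONS}
    for subject_dir in sorted(Path(root).iterdir()):
        for cover in COVER_CONDITIONS:
            cover_dir = subject_dir / modality / cover
            if cover_dir.is_dir():
                grouped[cover].extend(sorted(cover_dir.glob("*.png")))
    return grouped
```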
“…Much recent algorithmic work [7]-[9] in patient positioning has focused on estimating the 2D or 3D keypoint locations on the patient body. Such keypoints represent only a very sparse sampling of the full body mesh in the 3D space that defines the digital human body.…”
confidence: 99%