2021
DOI: 10.1007/s11548-021-02476-0

Seeing under the cover with a 3D U-Net: point cloud-based weight estimation of covered patients

Abstract: Purpose: Body weight is a crucial parameter for patient-specific treatments, particularly in the context of proper drug dosage. Contactless weight estimation from visual sensor data constitutes a promising approach to overcome challenges arising in emergency situations. Machine learning-based methods have recently been shown to perform accurate weight estimation from point cloud data. The proposed methods, however, are designed for controlled conditions in terms of visibility and position of the…

Cited by 8 publications (5 citation statements) · References 36 publications
“…Other researchers also focus on the possible occlusion issue. Alexander et al. [8] studied the weight estimation of covered patients. They implemented a 3D U-Net to reconstruct the 3D point clouds of the subject under blankets, and then a 3D CNN was applied for the weight regression.…”
Section: A. Body Weight Estimation (mentioning)
confidence: 99%
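For context, the two-stage pipeline described in this statement can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: the class names (UNet3D, WeightRegressor3D), layer widths, network depth, and the 32³ voxel resolution are all assumptions made for the sketch.

```python
# Minimal sketch (assumed layout, not the authors' code): a 3D U-Net maps a
# voxelized point cloud of the covered patient to an "uncovered" occupancy
# volume, and a small 3D CNN regresses body weight from that volume.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with ReLU, the usual U-Net building block.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    """Two-level 3D U-Net (depth reduced for brevity)."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv3d(base, 1, 1)          # occupancy logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)

class WeightRegressor3D(nn.Module):
    """Small 3D CNN regressing a scalar weight from the occupancy volume."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, vol):
        f = self.features(vol).flatten(1)
        return self.head(f).squeeze(1)            # predicted weight per sample

if __name__ == "__main__":
    covered = torch.rand(2, 1, 32, 32, 32)        # voxelized covered point cloud
    uncovered_logits = UNet3D()(covered)          # predicted body occupancy
    weight = WeightRegressor3D()(torch.sigmoid(uncovered_logits))
    print(weight.shape)                           # torch.Size([2])
```

Separating volume completion from weight regression, as in the quoted description, lets each sub-network be trained and inspected on its own.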
“…And a portion of approaches [8, 56] utilize optical flow or depth as their inputs, which are the least affected by domain shift compared with RGB. Bigalke et al. [5] add a human prior loss according to human anatomy. Kundu et al. [29] define the uncertainty of predictions and control the uncertainty value while training.…”
Section: Unsupervised Domain Adaptation Training (mentioning)
confidence: 99%
“…To further ensure the anatomical plausibility of the pose, we introduce a human prior loss L_prior adapted from Bigalke et al. [5]. Specifically, we formulate three losses to penalize asymmetric limb lengths (L_symm), implausible joint angles (L_angle), and implausible bone lengths (L_length).…”
Section: Unsupervised Domain Adaptation (mentioning)
confidence: 99%
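The three prior terms named in this statement lend themselves to a short sketch. The snippet below is a hypothetical rendering of such a loss, not the formulation of Bigalke et al. [5]: the limb pairings, bone list, joint count, and the plausible length range are invented for illustration, and the angle term is only indicated.

```python
# Hedged sketch of one plausible human prior loss of the kind described above.
# Joint indices, symmetric limb pairs, and the bone-length range are assumed.
import torch

SYMMETRIC_LIMBS = [((0, 1), (3, 4)), ((1, 2), (4, 5))]  # hypothetical left/right limb pairs
BONES = [(0, 1), (1, 2), (3, 4), (4, 5)]                # hypothetical (parent, child) bones
BONE_RANGE = (0.15, 0.60)                               # assumed plausible lengths in metres

def bone_len(joints, a, b):
    # joints: (B, J, 3) predicted 3D joint positions; returns per-sample bone length.
    return (joints[:, a] - joints[:, b]).norm(dim=-1)

def prior_loss(joints):
    # L_symm: penalize differing lengths of corresponding left/right limbs.
    l_symm = sum((bone_len(joints, *l) - bone_len(joints, *r)).abs().mean()
                 for l, r in SYMMETRIC_LIMBS)
    # L_length: penalize bone lengths outside an anatomically plausible range.
    lo, hi = BONE_RANGE
    lengths = torch.stack([bone_len(joints, a, b) for a, b in BONES], dim=1)
    l_length = (torch.relu(lo - lengths) + torch.relu(lengths - hi)).mean()
    # L_angle would bound joint angles analogously; it is omitted here because it
    # requires a full skeleton definition with specified angle limits per joint.
    return l_symm + l_length

joints = torch.rand(8, 6, 3)   # toy batch: 8 poses, 6 joints
loss = prior_loss(joints)      # scalar to be added to the training objective
```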