2020
DOI: 10.48550/arxiv.2008.08735
Preprint

Simultaneously-Collected Multimodal Lying Pose Dataset: Towards In-Bed Human Pose Monitoring under Adverse Vision Conditions

Abstract: Computer vision (CV) has achieved great success in interpreting semantic meaning from images, yet CV algorithms can be brittle for tasks under adverse vision conditions or with limited data/label pairs. One such task is in-bed human pose estimation, which has significant value in many healthcare applications. In-bed pose monitoring in natural settings can involve complete darkness or full occlusion. Furthermore, the lack of publicly available in-bed pose datasets hinders the use of…

Cited by 11 publications (61 citation statements)
References 57 publications (113 reference statements)
“…• Evaluating PEye performance on the largest-ever in-bed multimodal human pose dataset, the Simultaneously-collected multimodal Lying Pose (SLP) dataset, comprising >100 human subjects and nearly 15,000 pose images across RGB, LWIR, and contact pressure map modalities, publicly available on our webpage Liu et al (2020).…”
Section: Our Contributions
confidence: 99%
“…To evaluate the effectiveness of the PEye approach in generating pressure data from vision signals, in our previous work Liu et al (2020) we formed and publicly released the Simultaneously-collected multimodal Lying Pose (SLP) dataset, in which RGB, LWIR, and PM signals are collected using a Logitech webcam, a FLIR IR camera, and a Tekscan pressure sensing mat, respectively. We collected data from 102 subjects who were instructed to lie down on a twin-size bed and take random poses in natural ways.…”
Section: Multimodal Dataset Collection
confidence: 99%
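The modalities described in this quote come from sensors with very different native resolutions: a Tekscan pressure mat produces a coarse sensing grid, while the webcam and FLIR camera produce much denser images. A common preprocessing step when working with such data is to resample the low-resolution pressure map (PM) toward the image resolution so the modalities can be compared or stacked. Below is a minimal nearest-neighbor upsampling sketch in NumPy; the 64×27 grid and 192×81 target are hypothetical sizes chosen for illustration, not the SLP dataset's documented dimensions.

```python
import numpy as np

def upsample_nearest(pm: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor resampling of a low-resolution pressure map
    to a higher target resolution (e.g., that of an RGB/LWIR crop).
    Each output cell copies the value of its nearest input cell."""
    in_h, in_w = pm.shape
    rows = np.arange(out_h) * in_h // out_h  # source row index per output row
    cols = np.arange(out_w) * in_w // out_w  # source col index per output col
    return pm[np.ix_(rows, cols)]

# Hypothetical sizes: a 64x27 sensing grid upsampled to a 192x81 crop.
pm = np.random.rand(64, 27)
pm_up = upsample_nearest(pm, 192, 81)
```

Nearest-neighbor is the usual choice for pressure maps because it preserves the original sensor readings exactly (no interpolated pseudo-values), which matters if the map is later interpreted physically.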
“…To address the challenge of heavy occlusion when inferring human pose at rest, prior work has required multiple modalities as input, including RGB, depth, thermal, and pressure imagery [2], [3]. Our deep network, BodyPressureWnet, employs only a depth camera, but when trained on enough data (over 100,000 samples) it substantially outperforms prior work.…”
Section: Introduction
confidence: 99%
“…Deep models for human pose estimation are highly sensitive to the pose distribution in the training data. To generate the synthetic bodies at rest, we initialize the simulator with poses that are close to real poses in the Simultaneously-collected multimodal Lying Pose (SLP) dataset [2]. As the SLP dataset only has 2D pose annotations, we present an annotation method to fit the 3D Skinned Multi-Person Linear (SMPL) body model [7] to the real data.…”
Section: Introduction
confidence: 99%
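Fitting a 3D body model such as SMPL to 2D annotations, as the citing work describes, hinges on a reprojection objective: project the model's 3D joints into the image and minimize the distance to the annotated 2D keypoints. Full SMPL fitting requires a body-model library (e.g., the `smplx` package) to optimize pose and shape; the sketch below illustrates only the camera step of such a pipeline, solving in closed form for a weak-perspective camera (scale plus 2D translation) that best aligns a fixed set of 3D joints to 2D keypoints. The joint count and camera parameters are synthetic stand-ins, not values from the cited paper.

```python
import numpy as np

def fit_weak_perspective(joints3d: np.ndarray, joints2d: np.ndarray):
    """Least-squares fit of scale s and translation t = (tx, ty) so that
    s * joints3d[:, :2] + t ~= joints2d (weak-perspective projection).
    Each joint contributes two linear equations:
        s*x + tx = u   and   s*y + ty = v
    so the system is solved in one np.linalg.lstsq call."""
    xy = joints3d[:, :2]
    n = xy.shape[0]
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = xy[:, 0]; A[0::2, 1] = 1.0  # rows for u-coordinates
    A[1::2, 0] = xy[:, 1]; A[1::2, 2] = 1.0  # rows for v-coordinates
    b = joints2d.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    s, tx, ty = params
    return s, np.array([tx, ty])

# Synthetic check: project known 3D joints with a known camera, then recover it.
rng = np.random.default_rng(0)
j3d = rng.normal(size=(14, 3))                 # 14 hypothetical body joints
s_true, t_true = 50.0, np.array([32.0, 48.0])  # ground-truth camera
j2d = s_true * j3d[:, :2] + t_true
s_est, t_est = fit_weak_perspective(j3d, j2d)
```

In a full fitting loop this camera solve would alternate with (or be jointly optimized alongside) SMPL pose and shape parameters, with the reprojection error as the data term.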