2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00306
HUMBI: A Large Multiview Dataset of Human Body Expressions

Cited by 75 publications (47 citation statements)
References 63 publications
“…A person's emotional state is often conveyed through bodily expression. As such, analyzing body-based activities, including action, gesture, and posture, is a popular research topic [75,36,23,62,67,71,24,53,93] in the community. However, these datasets focus on recognizing human activities (e.g., a man is jumping) and are rarely related to emotional states.…”
Section: Related Work (mentioning, confidence: 99%)
“…Datasets for 3D human motion and interactions. A large number of datasets focus on 3D human pose and motion from third-person views [16,23,27,30,31,43,52,57,71,74,75,84,90]. For example, Human3.6M [27] and AMASS [51] use optical marker-based motion capture to collect large amounts of high-quality 3D motion sequences; they are limited to constrained studio setups, and images, when available, are polluted by marker data.…”
Section: Related Work (mentioning, confidence: 99%)
“…You2Me [55] similarly focuses on egocentric body pose prediction, annotating 3D ground-truth skeletons from image sequences captured with a chest-mounted camera plus external cameras. EgoMoCap [47] analyzes the second-…”
Dataset comparison excerpted with this statement (frames / subjects): [84] 380k / 772*; TotalCapture [71] 1,900k / 5; Human3.6M [27] 3,600k / 11; Mo2Cap2 [80] 15k / 5; You2Me [55] 150k / 10; HPS [22] 300k / 7.
Section: Related Work (mentioning, confidence: 99%)
“…Alternatively, several methods use marker-less motion capture, e.g. MuPoTS-3D [37], PanopticStudio [26], MPI-INF-3DHP-Test [36], and HUMBI [56]. Such methods are typically less accurate than marker-based systems, but they avoid intrusive markers, allow more varied clothing, and are sometimes used in more realistic scenes, e.g.…”
Section: Related Work (mentioning, confidence: 99%)