2016 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2016.7487304
Deep learning for human part discovery in images

Cited by 75 publications (59 citation statements) | References 18 publications
“…See Figure 5 for sample test images and corresponding ground truth (GT) annotation. We use the same train/test split as [25]: 2 subjects for training and 4 subjects for testing. The amount of data is limited for training deep networks.…”
Section: Segmentation On Freiburg Sitting People
confidence: 99%
“…In [25], the authors introduce a network that outputs a high-resolution segmentation after several layers of upconvolutions. For a fair comparison, we modify our network to output full resolution by adding one bilinear upsampling layer followed by a nonlinearity (ReLU) and a convolutional layer with 3 × 3 filters that outputs 15 × 300 × 300 instead of 15 × 64 × 64, as explained in Section 4.…”
Section: Segmentation On Freiburg Sitting People
confidence: 99%
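The modification quoted above (bilinear upsampling to full resolution, a ReLU, then a 3 × 3 convolution producing a 15 × 300 × 300 part map) can be expressed as a small output head. The sketch below is an illustrative PyTorch version, not the cited authors' code; the module name FullResolutionHead and the channel handling are hypothetical assumptions, while the 15 part channels and the 64 × 64 → 300 × 300 sizes come from the quote.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FullResolutionHead(nn.Module):
    """Hypothetical sketch of the quoted output head: bilinearly upsample
    coarse 15-channel part logits from 64x64 to 300x300, apply a ReLU,
    then a 3x3 convolution that keeps the 15 part channels."""

    def __init__(self, num_parts: int = 15):
        super().__init__()
        # 3x3 convolution with padding 1 preserves the 300x300 resolution.
        self.conv = nn.Conv2d(num_parts, num_parts, kernel_size=3, padding=1)

    def forward(self, coarse_logits: torch.Tensor) -> torch.Tensor:
        # coarse_logits: (N, 15, 64, 64) as described in the quote.
        x = F.interpolate(coarse_logits, size=(300, 300),
                          mode="bilinear", align_corners=False)
        x = F.relu(x)          # nonlinearity between upsampling and convolution
        return self.conv(x)    # (N, 15, 300, 300)


if __name__ == "__main__":
    head = FullResolutionHead()
    out = head(torch.randn(1, 15, 64, 64))
    print(out.shape)  # torch.Size([1, 15, 300, 300])
```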
“…[28][29][30] Also, there is the more challenging task of simultaneous annotation of multiple people [17,31]. In addition, there is work like that of Oliveira et al. [32] that performs human part segmentation based on fully convolutional networks [23]. Our work focuses solely on the task of keypoint localization of a single person's pose from an RGB image.…”
Section: Related Work
confidence: 99%
“…test the performance of our hybrid tracker against the method in [3] on our augmented test set to examine how much our Fast-FCN-based filtering improves overall tracking accuracy. We then compare the Fast-FCN architecture against the larger U-Net [23] and VGG-FCN [25] models and show that our simpler model performs almost as well as the U-Net and better than VGG-FCN in this context while being much faster. We also report raw segmentation performance on this dataset for all three models.…”
Section: Methods
confidence: 99%
“…In [26] [24] [25] [34], researchers developed FCN architectures based on the VGG-16 [35] classification network by adding deconvolution layers. Thus, we chose the VGG-FCN structure proposed in [25] for body part labeling as our second comparison. We rebuilt the structure of these networks and only modified the sizes to fit our dataset.…”
Section: Architecture Comparison
confidence: 99%
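For readers unfamiliar with the VGG-FCN pattern referenced in the quote above, the following sketch shows the general idea of turning the VGG-16 convolutional encoder into a dense part-labeling network by appending transposed-convolution (deconvolution) layers. It is an illustrative assumption, not the architecture of [25] or the other cited works: the decoder widths, the five upsampling stages, and the 15-class output are placeholder choices.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16


class VGGFCNSketch(nn.Module):
    """Illustrative VGG-FCN-style sketch (not the exact model from the cited
    papers): a VGG-16 convolutional encoder followed by transposed
    convolutions that recover a per-pixel body-part labeling."""

    def __init__(self, num_classes: int = 15):
        super().__init__()
        # VGG-16 convolutional trunk; its five poolings downsample by 32.
        self.encoder = vgg16(weights=None).features

        # Assumed decoder: five stride-2 transposed convolutions undo the
        # downsampling; the intermediate widths 256..32 are placeholders.
        def up(c_in: int, c_out: int) -> nn.Sequential:
            return nn.Sequential(
                nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )

        self.decoder = nn.Sequential(
            up(512, 256), up(256, 128), up(128, 64), up(64, 32),
            nn.ConvTranspose2d(32, num_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 3, H, W) with H and W divisible by 32.
        return self.decoder(self.encoder(x))  # (N, num_classes, H, W)


if __name__ == "__main__":
    model = VGGFCNSketch()
    logits = model(torch.randn(1, 3, 320, 320))
    print(logits.shape)  # torch.Size([1, 15, 320, 320])
```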