State-of-the-art human pose estimation methods are based on the heat map representation. Despite its good performance, the representation has inherent issues, such as non-differentiable post-processing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding these issues. It is differentiable, efficient, and compatible with any heat map based method. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, and specifically, for the first time, on 3D pose estimation.
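A minimal sketch of the integral operation (often called soft-argmax) described above, assuming a single-joint heat map of shape (H, W); the function name is illustrative, not the authors' code:

```python
import numpy as np

def integral_pose(heatmap):
    """Turn a heat map into (x, y) coordinates via the integral operation.

    Softmax-normalize the heat map, then take the expectation of the pixel
    coordinates under that distribution. Unlike taking the argmax, this is
    differentiable and free of quantization error.
    """
    h, w = heatmap.shape
    probs = np.exp(heatmap - heatmap.max())
    probs /= probs.sum()                 # normalize to a probability distribution
    ys, xs = np.mgrid[0:h, 0:w]          # per-pixel coordinate grids
    x = (probs * xs).sum()               # expected x coordinate
    y = (probs * ys).sum()               # expected y coordinate
    return x, y
```

Because the output is an expectation rather than a discrete index, a coordinate-space regression loss can be back-propagated through the heat map, which is what makes the operation compatible with existing heat map based methods.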
In this paper, we study the task of 3D human pose estimation in the wild. This task is challenging due to a lack of training data: existing datasets contain either in-the-wild images with 2D pose annotations or lab images with 3D pose annotations. We propose a weakly-supervised transfer learning method that uses mixed 2D and 3D labels in a unified deep neural network with a two-stage cascaded structure. Our network augments a state-of-the-art 2D pose estimation sub-network with a 3D depth regression sub-network. Unlike previous two-stage approaches that train the two sub-networks sequentially and separately, our training is end-to-end and fully exploits the correlation between the 2D pose and depth estimation sub-tasks. The deep features are better learned through shared representations. In this way, the 3D pose labels from controlled lab environments are transferred to in-the-wild images. In addition, we introduce a 3D geometric constraint to regularize the 3D pose prediction, which is effective in the absence of ground-truth depth labels. Our method achieves competitive results on both 2D and 3D benchmarks.
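A minimal PyTorch sketch of the mixed 2D/3D supervision idea: the 2D heat map loss is applied to every sample, while the depth loss is applied only where 3D (lab) labels exist. All names and shapes here are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def mixed_loss(heatmaps_pred, depth_pred, target_2d, target_depth, has_3d):
    """has_3d is a boolean mask over the batch marking samples with 3D labels."""
    # 2D heat map loss: both the in-the-wild and lab datasets have 2D labels.
    loss_2d = F.mse_loss(heatmaps_pred, target_2d)
    # Depth regression loss: only samples from the lab (3D) dataset contribute.
    if has_3d.any():
        loss_depth = F.mse_loss(depth_pred[has_3d], target_depth[has_3d])
    else:
        loss_depth = depth_pred.sum() * 0.0   # keep the graph connected
    return loss_2d + loss_depth
```

Training both sub-tasks end-to-end on mixed batches is what lets the shared features transfer the lab 3D supervision to in-the-wild images.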
We extend the previous 2D cascaded object pose regression work [9] in two aspects so that it works better for 3D articulated objects. Our first contribution is 3D pose-indexed features that generalize the previous 2D parameterized features and achieve better invariance to 3D transformations. Our second contribution is a principled hierarchical regression that is adapted to the articulated object structure; it is therefore more accurate and faster. Comprehensive experiments verify the state-of-the-art accuracy and efficiency of the proposed approach on the challenging 3D hand pose estimation problem, on both a public dataset and our new dataset.
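A minimal sketch of the cascaded regression loop this line of work builds on: each stage extracts features indexed by the current pose estimate (making them roughly invariant to the object's transformation) and regresses a pose update. The regressors and feature extractor are passed in as callables; everything here is an illustrative assumption:

```python
def cascaded_regression(image, init_pose, regressors, extract_features):
    """Refine a pose estimate over a cascade of stages.

    image:            the input depth/RGB image
    init_pose:        initial pose estimate (e.g., the mean pose)
    regressors:       one learned regressor per cascade stage
    extract_features: callable (image, pose) -> pose-indexed features
    """
    pose = init_pose
    for regress in regressors:
        feats = extract_features(image, pose)   # features indexed by current pose
        pose = pose + regress(feats)            # additive pose update
    return pose
```

The paper's contributions slot into this loop: the features become 3D pose-indexed, and the per-stage update is replaced by a hierarchical regression over the articulated structure.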
Regression-based methods do not perform as well as detection-based methods for human pose estimation. A central problem is that the structural information in the pose is not well exploited by previous regression methods. In this work, we propose a structure-aware regression approach. It adopts a reparameterized pose representation using bones instead of joints. It exploits the joint connection structure to define a compositional loss function that encodes the long-range interactions in the pose. It is simple, effective, and general for both 2D and 3D pose estimation in a unified setting. Comprehensive evaluation validates the effectiveness of our approach. It significantly advances the state of the art on Human3.6M [20] and is competitive with state-of-the-art results on MPII [3].
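A minimal sketch of the bone reparameterization and a compositional loss over joint pairs, assuming joints of shape (J, D) and a parent index per joint; the example kinematic tree and the L1 penalty are illustrative assumptions:

```python
import numpy as np

PARENT = [-1, 0, 1, 2, 0, 4]   # example kinematic tree; -1 marks the root

def joints_to_bones(joints, parent=PARENT):
    """Reparameterize joints as bones: bone_k = joint_k - joint_parent(k)."""
    return np.stack([joints[k] - (joints[p] if p >= 0 else 0 * joints[k])
                     for k, p in enumerate(parent)])

def compositional_loss(pred_bones, gt_joints, parent=PARENT):
    """L1 loss over joint-pair offsets composed from predicted bones.

    A joint's position is the sum of bones on the path from the root, so
    the offset between any joint pair is composed from the predicted bones;
    an error in one bone is penalized through every pair it affects,
    encoding long-range interactions in the pose.
    """
    J = len(parent)
    joints = [None] * J
    for k, p in enumerate(parent):           # parents precede children here
        joints[k] = pred_bones[k] + (joints[p] if p >= 0 else 0)
    loss = 0.0
    for u in range(J):
        for v in range(J):
            loss += np.abs((joints[u] - joints[v])
                           - (gt_joints[u] - gt_joints[v])).sum()
    return loss / (J * J)

# Tiny usage example with synthetic joints:
gt = np.random.rand(6, 3)
pred_bones = joints_to_bones(gt + 0.01 * np.random.randn(6, 3))
print(compositional_loss(pred_bones, gt))
```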
Learning articulated object pose is inherently difficult because the pose is high dimensional but subject to many structural constraints. Most existing works do not model such constraints and do not guarantee the geometric validity of their pose estimates, therefore requiring post-processing to recover the correct geometry if desired, which is cumbersome and sub-optimal. In this work, we propose to directly embed a kinematic object model into deep neural network learning for general articulated object pose estimation. The kinematic function is defined on appropriately parameterized object motion variables. It is differentiable and can be used in gradient-descent-based optimization during network training. The prior knowledge of the object's geometric model is fully exploited and the structure is guaranteed to be valid. We show convincing experimental results on a toy example and on the 3D human pose estimation problem. For the latter, we achieve state-of-the-art results on the Human3.6M dataset.
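A minimal PyTorch sketch of the idea of embedding a kinematic model in the network: a differentiable forward-kinematics layer maps motion parameters to joint positions, so predicted poses are geometrically valid by construction. The planar chain with fixed bone lengths is an illustrative toy assumption, not the paper's human model:

```python
import torch

def forward_kinematics(angles, bone_lengths):
    """angles: (J,) relative joint angles; bone_lengths: (J,) fixed lengths.

    Returns (J, 2) joint positions of a planar kinematic chain rooted at the
    origin. Built from differentiable ops, so gradients flow from a
    joint-position loss back to the motion parameters.
    """
    positions = []
    pos = torch.zeros(2)
    theta = torch.zeros(())
    for a, l in zip(angles, bone_lengths):
        theta = theta + a                                   # accumulate rotation
        pos = pos + l * torch.stack([torch.cos(theta), torch.sin(theta)])
        positions.append(pos)
    return torch.stack(positions)

# Usage: a network would regress `angles`; the kinematic layer turns them
# into joint positions trained with an ordinary position loss, while bone
# lengths stay fixed and therefore always valid.
angles = torch.tensor([0.3, -0.2, 0.5], requires_grad=True)
bones = torch.tensor([1.0, 0.8, 0.5])
joints = forward_kinematics(angles, bones)
loss = joints.pow(2).sum()
loss.backward()                                             # gradients reach `angles`
```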
OBJECTIVES: Prophylactic vaccination of young women aged 16 to 26 years with the 9-valent (6/11/16/18/31/33/45/52/58) human papillomavirus (HPV) virus-like particle (9vHPV) vaccine prevents infection and disease. We conducted a noninferiority immunogenicity study to bridge the findings in young women to girls and boys aged 9 to 15 years.
METHODS: Subjects (N = 3066) received a 3-dose regimen of 9vHPV vaccine administered at day 1, month 2, and month 6. Anti-HPV serologic assays were performed at day 1 and month 7. Noninferiority required that the lower bound of the 2-sided 95% confidence intervals of geometric mean titer ratios (boys:young women or girls:young women) be >0.67 for each HPV type. Systemic and injection-site adverse experiences (AEs) and serious AEs were monitored.
RESULTS: At 4 weeks after dose 3, >99% of girls, boys, and young women seroconverted for each vaccine HPV type. Increases in geometric mean titers to HPV types 6/11/16/18/31/33/45/52/58 were elicited in all vaccine groups. Responses in girls and boys were noninferior to those of young women. Persistence of anti-HPV responses was demonstrated through 2.5 years after dose 3. Administration of the 9vHPV vaccine was generally well tolerated. A lower proportion of girls (81.9%) and boys (72.8%) than young women (85.4%) reported injection-site AEs, most of which were mild to moderate in intensity.
CONCLUSIONS: These data support bridging the efficacy findings with the 9vHPV vaccine in young women 16 to 26 years of age to girls and boys 9 to 15 years of age and implementing gender-neutral HPV vaccination programs in preadolescents and adolescents.
WHAT'S KNOWN ON THIS SUBJECT: Prophylactic vaccination of young women 16 to 26 years of age with the 9-valent human papillomavirus (HPV) virus-like particle (9vHPV) vaccine prevents infection and disease with vaccine HPV types.
WHAT THIS STUDY ADDS: These data support bridging the efficacy findings with the 9vHPV vaccine in young women 16 to 26 years of age to girls and boys 9 to 15 years of age and implementation of gender-neutral HPV vaccination programs in preadolescents and adolescents.
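A small illustrative sketch (not the study's statistical code) of the noninferiority criterion above: compute the geometric mean titer (GMT) ratio of two groups and the lower bound of a two-sided 95% confidence interval on the log scale; noninferiority requires that lower bound to exceed 0.67. The titer data below are made-up example values:

```python
import numpy as np

def gmt_ratio_ci(titers_a, titers_b, z=1.96):
    """95% CI for GMT(a)/GMT(b), computed on log-transformed titers."""
    la, lb = np.log(titers_a), np.log(titers_b)
    diff = la.mean() - lb.mean()
    se = np.sqrt(la.var(ddof=1) / len(la) + lb.var(ddof=1) / len(lb))
    return np.exp(diff - z * se), np.exp(diff + z * se)

rng = np.random.default_rng(0)
girls = rng.lognormal(mean=6.1, sigma=0.8, size=300)          # synthetic titers
young_women = rng.lognormal(mean=6.0, sigma=0.8, size=300)    # synthetic titers
lo, hi = gmt_ratio_ci(girls, young_women)
print(f"GMT ratio 95% CI: ({lo:.2f}, {hi:.2f}); noninferior if lower bound > 0.67")
```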
The plethora of biomedical relations embedded in medical records demands researchers' attention. Previous theoretical and practical work has been restricted to traditional machine learning techniques. However, these methods are susceptible to the "vocabulary gap" and data sparseness issues, and cannot automate feature extraction. To address these issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model makes two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) it obviates the need for manual feature engineering through automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall F-score of 70.2%, compared to 67.0% for the standard linear SVM based system, on the DDIExtraction 2013 challenge dataset. For the PPI task, we evaluated our system on the AIMed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in F-score, respectively.
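A minimal PyTorch sketch of the multichannel CNN idea: several word embedding versions are stacked as input channels and convolved jointly, so features are learned automatically rather than hand-engineered. Dimensions and layer choices are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class MCCNN(nn.Module):
    def __init__(self, n_channels=5, emb_dim=100, n_filters=64, n_classes=2):
        super().__init__()
        # One input channel per word-embedding version (e.g., five versions).
        self.conv = nn.Conv2d(n_channels, n_filters, kernel_size=(3, emb_dim))
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, sent_len, emb_dim), one channel per embedding.
        h = torch.relu(self.conv(x)).squeeze(3)   # (batch, n_filters, sent_len - 2)
        h = torch.max(h, dim=2).values            # max-over-time pooling
        return self.fc(h)                         # relation class logits

x = torch.randn(8, 5, 40, 100)   # batch of 8 sentences, 5 embedding channels
logits = MCCNN()(x)
print(logits.shape)              # torch.Size([8, 2])
```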