This paper explores several approaches for articulated pose estimation, assuming that video-rate depth information is available from either stereo cameras or other sensors. We use these depth measurements in the traditional linear brightness constraint equation, as well as in a depth constraint equation. To capture the joint constraints, we combine the brightness and depth constraints with twist mathematics. We address several important issues in forming the constraint equations, including updating the body rotation matrix without resorting to a first-order matrix approximation and removing the coupling between the rotation and translation updates. The resulting constraint equations are linear in a modified parameter set. After solving these linear constraints, a single closed-form non-linear transformation returns the updates to the original pose parameters. We show results for tracking body pose in oblique views of synthetic walking sequences and in moving-camera views of synthetic jumping-jack sequences. We also show results for tracking body pose in side views of a real walking sequence.
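For reference, the generic per-pixel forms of the two constraints mentioned above can be written as follows; this is a minimal sketch of the standard formulations only, and the paper's twist-parameterized, decoupled versions are not reproduced here.

% Assumed standard forms of the brightness and depth constraints;
% (u, v) is the image-plane motion of a pixel, I and Z are the brightness
% and depth images, subscripts denote partial derivatives, and V_Z is the
% velocity component along the optical axis.
\begin{align}
  I_x u + I_y v + I_t &= 0   && \text{(linear brightness constraint)} \\
  Z_x u + Z_y v + Z_t &= V_Z && \text{(depth constraint)}
\end{align}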
Accurate vessel segmentation in retinal images plays a vital role in retinopathy diagnosis and analysis. The presence of very thin vessels and low image contrast, however, makes the segmentation task difficult. In the proposed method, retinal vessels are segmented using a multiscale Fully Convolved Convolutional Neural Network (FCCNN) architecture. The architecture is trained for pixel classification to cope with the varying width and direction of the vessel structures in the retina. Green-channel extraction gives better contrast between vessels and background. A structure-preserving skeletonization step is then applied, so the vasculature remains unchanged. In addition, an improved class-balanced cross-entropy loss function is included to counter misclassification and the class-imbalance problem. The proposed method is evaluated on the public DRIVE retinal vessel segmentation database. After 90 epochs on DRIVE, the FCCNN attained an accuracy of 92.73% and a loss of 0.0632. The experimental results show that the segmentation quality and accuracy obtained with the FCCNN are higher than those of the other architectures.
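To illustrate the class-balanced cross-entropy idea, the sketch below shows one common per-pixel weighting in which the dominant background class is down-weighted so that sparse vessel pixels are not overwhelmed during training. This is an assumed, generic formulation; the paper's exact "improved" loss is not specified in the abstract, and the function name and arguments here are hypothetical.

import numpy as np

def class_balanced_bce(probs, labels, eps=1e-7):
    """probs: per-pixel vessel probabilities; labels: {0,1} ground-truth vessel mask."""
    probs = np.clip(probs, eps, 1.0 - eps)
    beta = 1.0 - labels.mean()  # fraction of background pixels (typically large)
    # Vessel (positive) pixels are weighted by beta, background by (1 - beta),
    # so the rare vessel class contributes comparably to the total loss.
    pos = -beta * labels * np.log(probs)
    neg = -(1.0 - beta) * (1.0 - labels) * np.log(1.0 - probs)
    return (pos + neg).mean()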