The early diagnosis of cerebral palsy is an area that has recently seen significant multi-disciplinary research. Diagnostic tools such as the General Movements Assessment (GMA) have produced very promising results; however, automating these processes could improve the accessibility of the assessment and also enhance our understanding of infant movement development. Previous work has established the viability of using pose-based features extracted from RGB video sequences to classify infant body movements based upon the GMA. In this paper, we propose a series of new and improved features, and a feature fusion pipeline, for this classification task. We also introduce the RVI-38 dataset, a series of videos captured as part of routine clinical care. Using this challenging dataset, we establish the robustness of several motion features for classification, which subsequently informs the design of our proposed feature fusion framework based upon the GMA. We evaluate the framework's classification performance using both the RVI-38 dataset and the publicly available MINI-RGBD dataset, and we also implement several other methods from the literature for direct comparison on these two independent datasets. Our experimental results and feature analysis show that our proposed pose-based method performs well across both datasets. The proposed features capture finer detail than previous methods and further model GMA-specific body movements. They also allow us to take advantage of additional body-part-specific information to improve the overall classification performance, whilst retaining GMA-relevant, interpretable, and shareable features.
In this paper, a system with six depth cameras was built to scan both feet simultaneously. An improved calibration method based on a T-shaped checkerboard was used to calculate the extrinsic parameters of the cameras. T-shaped virtual checkerboards were introduced to further fine-tune the accuracy of calibration based on the iterative closest point algorithm. Based on the proposed foot scanner, a complete procedure was introduced to measure the foot automatically by locating the anatomical landmarks without manual intervention. Various experiments were presented to validate the performance of the scanner and the measurements. The results verified that the proposed methods were efficient and versatile for three-dimensional foot scanning and measurement.
In this paper, a new method was proposed to establish the relationship between three-dimensional (3D) foot shapes and their two-dimensional (2D) foot silhouettes, through which a complete 3D foot shape can be predicted by simply inputting its two 2D silhouettes. 3D foot scans of 80 participants were randomly selected as the training set, and those of another 20 participants were used as the testing set. Elliptical Fourier analysis (EFA) and principal component analysis (PCA) were adopted to parameterize the 3D foot shapes. A linear regression model was then developed to predict the 3D foot shape from the foot silhouettes. Experimental results indicated that an individual 3D foot shape can be predicted with a mean error between 1.21 and 1.27 mm, which provides sufficient accuracy for the fit evaluation of footwear.
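The abstract's prediction pipeline (parameterize shapes with PCA, then regress the shape parameters from silhouette descriptors) can be illustrated with a small NumPy sketch. Everything here is a stand-in: the data are synthetic, and the real method uses EFA coefficients as the silhouette descriptors, which we replace with generic feature vectors.

```python
import numpy as np

# Synthetic stand-ins: 80 training feet (as in the paper), each a flattened
# 3D shape vector Y[i], paired with a silhouette descriptor vector X[i].
rng = np.random.default_rng(1)
n_train, d_shape, d_sil = 80, 300, 10
X = rng.standard_normal((n_train, d_sil))                    # 2D descriptors
W_true = rng.standard_normal((d_sil, d_shape))
Y = X @ W_true + 0.01 * rng.standard_normal((n_train, d_shape))  # 3D shapes

# Step 1: PCA-parameterize the 3D shapes, keeping k principal components.
mu = Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Y - mu, full_matrices=False)
k = 10
scores = (Y - mu) @ Vt[:k].T          # low-dimensional shape parameters

# Step 2: linear regression from silhouette descriptors to PCA scores
# (with a bias column appended to X).
W, *_ = np.linalg.lstsq(np.c_[X, np.ones(n_train)], scores, rcond=None)

def predict_shape(x):
    """Predict a full 3D shape vector from one silhouette descriptor."""
    s = np.append(x, 1.0) @ W         # predicted PCA scores
    return mu + s @ Vt[:k]            # reconstruct the shape from the scores
```

The same two-step structure (compact shape parameterization, then a linear map from 2D measurements into that parameter space) is what makes the prediction well-posed despite the 2D-to-3D ambiguity.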
Purpose – Automatic body measurement is key to tailoring, mass customization, and fit/ease evaluation. The major challenges are locating the landmarks and extracting the sizes accurately. The purpose of this paper is to propose a new body-measurement method based on the loop structure.
Design/methodology/approach – The scanned human model is sliced into equally spaced layers, each consisting of loops of various shapes. Semantic feature analysis is treated as the problem of finding the points of interest (POI) and the loops of interest (LOI) according to the types of loop connections. Methods for determining the basic landmarks are detailed.
Findings – The experimental results validate that the proposed methods can locate landmarks and extract sizes on markerless human scans robustly and efficiently.
Originality/value – With this method, body measurement can be performed quickly, with average errors around 0.5 cm. The results of segmentation, landmarking, and body measurement also validate the robustness and efficiency of the proposed methods.
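The first step described above, slicing the scanned model into equally spaced layers, can be sketched as follows. This is a minimal illustration on a raw point cloud; the function name and parameters are hypothetical, and the paper's actual loop extraction operates on mesh cross-sections rather than simple point bins.

```python
import numpy as np

def slice_into_layers(points, n_layers=100):
    """Group scan points (N x 3 array) into n_layers equal-height bins
    along the vertical (z) axis, mimicking the equal slicing step."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_layers + 1)
    # digitize assigns each point to its height bin; clip keeps the
    # topmost point inside the last layer.
    idx = np.clip(np.digitize(z, edges) - 1, 0, n_layers - 1)
    return [points[idx == i] for i in range(n_layers)]
```

Each returned layer would then be analysed for its loop topology (one loop at a leg, two loops below the crotch, and so on) to detect the POI and LOI.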
In this article, we presented a new automatic three-dimensional scanned-garment fitting method for A-pose-scanned human models. Both the garment and the human body were decomposed based on feature lines defined by various landmarks. The patches of the three-dimensional garment were automatically positioned around the human model by establishing correspondences via feature matching. Virtual sewing was then performed to obtain the final virtual dressing results. Penetration between the cloth model and the human model was resolved by a geometric method constrained by Laplacian-based deformation. The experimental results indicated that the proposed method is an efficient way to dress various garments onto various human models while maintaining the original geometric features of the garments.
In 3D registration of point clouds, the goal is to find an optimal transformation that aligns the input shapes, provided that they have some overlap. Existing methods suffer from performance degradation when the overlapping ratio between neighbouring point clouds is small, and so far no existing method can be adopted for aligning shapes with no overlap. In this letter, to the best of our knowledge, we present the first method for the registration of 3D shapes without overlap, assuming that the shapes correspond to partial views of a known semi-rigid 3D prior. The method is validated and compared to existing methods on FAUST, a well-known dataset used for human body reconstruction. Experimental results show that this approach can effectively align shapes without overlap. Compared to existing state-of-the-art methods, this approach avoids iterative optimization and is robust to outliers and to the inherent inaccuracies induced by an initial rough alignment of the shapes. Introduction: 3D registration is a classical and fundamental problem for countless applications. As commodity depth cameras have become less expensive and more accurate, depth images play an increasingly important role in numerous tasks [1]. In order to obtain comprehensive information about 3D scenery, point clouds captured from multiple views need to be aligned. The well-established method is iterative closest point (ICP) [2], on which a myriad of variants has been built. In ICP, given a source shape and a target shape, the following steps are performed: (1) for each point in the source shape, identify the closest corresponding point in the target shape; (2) estimate the transformation by minimizing the mean square Euclidean distance between these correspondences; (3) transform the source shape using the transformation estimated in step 2; (4) iterate the above steps until the mean square distance reaches a pre-defined threshold.
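The four ICP steps enumerated above can be sketched in a few lines of NumPy and SciPy. This is a minimal point-to-point variant for illustration only (function and variable names are ours, not from the letter); production systems add outlier rejection, point-to-plane metrics, and better correspondence search.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, max_iter=50, tol=1e-6):
    """Minimal point-to-point ICP for N x 3 point arrays.
    Returns the accumulated rotation, translation, and aligned source."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(max_iter):
        # Step 1: closest-point correspondences.
        dist, idx = tree.query(src)
        matched = target[idx]
        # Step 2: best rigid transform for these pairs (Kabsch / SVD).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_t - R @ mu_s
        # Step 3: apply the transform and accumulate it.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        # Step 4: stop when the mean square distance stops improving.
        err = np.mean(dist ** 2)
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, src
```

Note that step 1 is exactly where the method fails for non-overlapping shapes: with no shared surface, the nearest-neighbour correspondences are meaningless, which motivates the prior-based approach of this letter.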
ICP and its variants are the dominant methods for the task of 3D registration. However, ICP-based methods assume that the source and target shapes have been roughly aligned and have sufficient overlap. Deep learning has shown an excellent ability to solve various problems that are difficult or impossible to address using traditional approaches, and recent research has explored 3D registration via deep learning [3], [4], [5], [6]. However, these methods are designed for shapes that partially overlap. In this letter, we present a novel deep learning method for 3D shape registration. Compared to existing methods, the main advantage of our method is that it successfully handles the non-overlapping shape registration problem, assuming that the shapes correspond to partial views of a known semi-rigid 3D prior. This problem cannot be addressed using ICP due to the lack of point correspondences. It is addressed in this letter, whose main contributions can be summarized as follows: