This article proposes a novel framework for the real-time capture, assessment, and visualization of ballet dance movements as performed by a student in an instructional, virtual reality (VR) setting. The acquisition of human movement data is facilitated by skeletal joint tracking captured using the popular Microsoft (MS) Kinect camera system, while instruction and performance evaluation are provided in the form of 3D visualizations and feedback through a CAVE virtual environment, in which the student is fully immersed. The proposed framework is based on the unsupervised parsing of ballet dance movement into a structured posture space using the spherical self-organizing map (SSOM). A unique feature descriptor is proposed to more appropriately reflect the subtleties of ballet dance movements, which are represented as gesture trajectories through posture space on the SSOM. This recognition subsystem is used to identify the category of movement the student is attempting when prompted (by a virtual instructor) to perform a particular dance sequence. The dance sequence is then segmented and cross-referenced against a library of gestural components performed by the teacher. This facilitates alignment and score-based assessment of individual movements within the context of the dance sequence. An immersive interface enables the student to review his or her performance from a number of vantage points, each providing a unique perspective and spatial context suggestive of how the student might make improvements in training. An evaluation of the recognition and virtual feedback systems is presented.
In this paper, an unsupervised learning network is explored to incorporate a self-learning capability into image retrieval systems. Our proposal is a new attempt to automate recursive content-based image retrieval. A self-organizing tree map (SOTM) is adopted to minimize user participation in an effort to automate interactive retrieval. The automatic learning mode has been applied to optimize the relevance feedback (RF) method and the single radial basis function-based RF method. In addition, a semiautomatic version is proposed to support retrieval with different user subjectivities. Image similarity is evaluated by a nonlinear model, which performs discrimination based on local analysis. Experimental results show robust and accurate performance by the proposed method, as compared with conventional noninteractive content-based image retrieval (CBIR) systems and user-controlled interactive systems, when applied to image retrieval in compressed and uncompressed image databases.
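The radial basis function-based relevance feedback mentioned above can be sketched as follows: images the user (or an automatic module) marks as relevant become kernel centers, and candidates are rescored by their summed kernel response. This is a generic illustration of RBF-based relevance feedback, not the paper's exact nonlinear model; the bandwidth `sigma` is an assumed parameter.

```python
import numpy as np

def rbf_relevance_score(candidate, relevant, sigma=1.0):
    """Score a candidate feature vector by summing Gaussian RBF kernels
    centered on the relevant examples (rows of `relevant`).
    Higher scores mean the candidate lies closer to the relevant set."""
    diffs = relevant - candidate
    sq_dists = np.sum(diffs ** 2, axis=1)
    return float(np.sum(np.exp(-sq_dists / (2.0 * sigma ** 2))))
```

On each feedback round the relevant set grows or is reweighted, so the ranking adapts to the user's subjectivity without retraining a global model.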
The proposed framework offers real-time analysis and visualization of ballet movements performed in a virtual reality environment. Students receive quantitative assessments, delivered through concurrent, localized visualizations, and a performance score based on incremental dynamic time warping.
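Dynamic time warping, the basis of the performance score above, aligns a student's movement sequence to the teacher's despite differences in tempo. The sketch below fills the cost table one student frame at a time with a rolling row, which mirrors an incremental, streaming evaluation; it is a generic DTW distance, not the paper's exact scoring function.

```python
import numpy as np

def dtw_score(student, teacher):
    """DTW distance between two sequences of frames (rows are feature
    vectors). Only the previous table row is kept, so each new student
    frame can be folded in incrementally as it arrives."""
    n, m = len(student), len(teacher)
    prev = np.full(m + 1, np.inf)
    prev[0] = 0.0                       # path must start at frame (0, 0)
    for i in range(n):
        curr = np.full(m + 1, np.inf)
        for j in range(m):
            cost = np.linalg.norm(student[i] - teacher[j])
            # Extend the cheapest of: match, insertion, deletion.
            curr[j + 1] = cost + min(prev[j], prev[j + 1], curr[j])
        prev = curr
    return float(prev[m])
```

A score of 0 means a frame-for-frame match; larger values can be mapped onto a grade for the student's rendition of each gestural component.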
Multilevel image segmentation is demonstrated as a rapid and accurate method of quantitative analysis for nanoparticle assembly in transmission electron microscope (TEM) images. The procedure, which combines the K-means clustering algorithm with the watershed transform, is tested on TEM images of FePt-based nanoparticles whose diameters are less than 5 nm. By solving the nanoparticle segmentation and separation problems, this unsupervised method is useful not only in the nonoverlapping case but also for agglomerated nanoparticles. Furthermore, the method exhibits scale invariance based on comparable results from images of different magnifications.
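The K-means stage of the procedure above can be sketched by clustering pixel intensities into a few levels, which separates particles from background before the watershed transform splits touching particles (the watershed step is omitted here). The deterministic initialization spread over the intensity range is an illustrative choice, not the paper's.

```python
import numpy as np

def kmeans_levels(image, k=3, iters=20):
    """Cluster pixel intensities into k levels (1-D K-means on gray
    values). Returns a label image and the final cluster centers."""
    pixels = image.reshape(-1).astype(float)
    # Spread initial centers evenly over the intensity range.
    centers = np.linspace(pixels.min(), pixels.max(), k)
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned pixels.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels.reshape(image.shape), centers
```

With k=2 this reduces to a binarization of particles versus background; a higher k captures intermediate gray levels in agglomerated regions.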