Background Over the past few years, the world has witnessed unprecedented growth in smartphone use. With sensors such as accelerometers and gyroscopes on board, smartphones have the potential to enhance our understanding of health behavior, in particular physical activity or the lack thereof. However, reliable and valid activity measurement using only a smartphone in situ has not yet been realized.

Objective To examine the validity of the iPod Touch (Apple, Inc.) and, in particular, to understand the value of gyroscopes for classifying types of physical activity, with the goal of creating a measurement and feedback system that integrates easily into individuals' daily living.

Methods We collected accelerometer and gyroscope data from 16 participants performing 13 activities with an iPod Touch, a device with essentially the same sensors and computing platform as an iPhone. The 13 activities were sitting, walking, jogging, and going up and down stairs at different paces. We extracted time- and frequency-domain features, including the mean and variance of the acceleration and gyroscope readings on each axis, the vector magnitude of acceleration, and the fast Fourier transform magnitude of each acceleration axis. Classifiers were compared using the Waikato Environment for Knowledge Analysis (WEKA) toolkit, including the C4.5 (J48) decision tree, multilayer perceptron, naive Bayes, logistic regression, k-nearest neighbor (kNN), and meta-algorithms such as boosting and bagging, under a 10-fold cross-validation protocol.

Results Overall, the kNN classifier achieved the best accuracies: 52.3%–79.4% for walking up and down stairs, 91.7% for jogging, 90.1%–94.1% for walking on level ground, and 100% for sitting. A 2-second sliding window with a 1-second overlap worked best. Adding gyroscope measurements proved more beneficial than relying solely on accelerometer readings for all activities, with improvements ranging from 3.1% to 13.4%.

Conclusions Common categories of physical activity and sedentary behavior (walking, jogging, and sitting) can be recognized with high accuracy using both the accelerometer and gyroscope onboard the iPod Touch or iPhone. This suggests the potential of developing just-in-time classification and feedback tools on smartphones.
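The pipeline the abstract describes (sliding windows, per-axis mean/variance features, vector magnitude, kNN) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, window helper, and 1-NN classifier shown here are assumptions chosen for brevity (the study used WEKA's kNN among other classifiers).

```python
import math

def windows(samples, rate_hz=30, win_s=2.0, overlap_s=1.0):
    """Yield sliding windows; 2 s with 1 s overlap was the paper's best setting.

    `samples` is a list of (x, y, z) sensor readings; rate_hz is assumed.
    """
    size = int(win_s * rate_hz)
    step = int((win_s - overlap_s) * rate_hz)
    for start in range(0, len(samples) - size + 1, step):
        yield samples[start:start + size]

def features(win):
    """Per-axis mean and variance, plus mean acceleration vector magnitude."""
    feats = []
    for axis in zip(*win):                      # transpose to per-axis tuples
        mu = sum(axis) / len(axis)
        feats += [mu, sum((v - mu) ** 2 for v in axis) / len(axis)]
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in win]
    feats.append(sum(mags) / len(mags))
    return feats

def knn_predict(train, query, k=1):
    """Majority vote among the k nearest (feature_vec, label) training pairs."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

In the study, gyroscope features were concatenated with these accelerometer features, which is what yielded the reported 3.1%–13.4% accuracy gains.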
Three-dimensional tele-immersive (3DTI) environments have great potential to promote collaborative work among geographically distributed users. However, most existing 3DTI systems work with only two sites, due to their heavy resource demands and the lack of a simple yet powerful networking model to handle connectivity, scalability, and quality-of-service (QoS) guarantees. In this paper, we explore the design space from the angle of multi-stream management to enable multi-party 3DTI communication. Multiple correlated 3D video streams are employed to provide a comprehensive representation of the physical scene in each 3DTI environment, and are rendered together to establish a common cyberspace among all participating 3DTI environments. This multi-stream correlation provides a unique opportunity for new approaches to QoS provisioning. Previous work mostly concentrated on compression and adaptation techniques on a per-stream basis, ignoring application-layer semantics and the coordination required among streams. We propose an innovative and generalized ViewCast model to coordinate multi-stream content dissemination over an overlay network. ViewCast leverages view semantics in 3D free-viewpoint video systems to bridge the gap between high-level user interest and low-level stream management. In ViewCast, only the view information is specified by the user/application, while the underlying control layer dynamically performs stream differentiation, selection, coordination, and dissemination. We present the details of ViewCast and evaluate it through both simulation and 3DTI sessions among tele-immersive environments residing at different institutions across Internet2. Our experimental results demonstrate the implementation feasibility and performance benefits of ViewCast in supporting multi-party 3DTI collaboration.
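The core idea of view-driven stream selection can be illustrated with a small sketch. This is a hypothetical simplification of ViewCast, not the paper's algorithm: the stream tuples, the angular-proximity ranking, and the bandwidth budget are all assumed here for illustration of how a requested view can drive per-stream decisions.

```python
def angular_dist(a, b):
    """Shortest angular distance in degrees between two camera headings."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def select_streams(streams, view_deg, budget_kbps):
    """Pick streams whose capture angle best matches the requested view.

    streams: list of (stream_id, camera_angle_deg, bitrate_kbps).
    Streams are ranked by angular proximity to the user's view, then
    admitted greedily until the bandwidth budget is exhausted.
    """
    ranked = sorted(streams, key=lambda s: angular_dist(s[1], view_deg))
    chosen, used = [], 0
    for stream_id, _angle, kbps in ranked:
        if used + kbps <= budget_kbps:
            chosen.append(stream_id)
            used += kbps
    return chosen
```

The point of the sketch is the separation of concerns the abstract describes: the user specifies only a view, and the system differentiates and selects among correlated streams on their behalf.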
We present a study of collaborative dancing between remote dancers in a tele-immersive environment that features full-body, real-time 3D capture, a wide field of view, multi-display 3D rendering, and attachment-free participation. We invited two professional dancers to perform collaborative dances in the environment. The coordination required one dancer to take the lead while the other followed with appropriate movement. Throughout the experiment, the dancers danced at various motion rates to evaluate how well collaborative dancing is supported within current technical limits. Our key findings indicate that 1) tele-immersive environments have a strong potential impact on the concept of choreography and the communication of live dance performance, 2) the presence of a multi-view display, real-body 3D rendering, an audio channel, and low intrusiveness greatly enhances the immersive and dancing experience, and 3) the level of synchronization achieved by the dancers was higher than would be expected from the video rate.
Background Compensations are commonly observed in patients with stroke when they engage in reaching without supervision; these behaviors may be detrimental to long-term functional improvement. Automatic detection and reduction of compensation can help patients perform tasks correctly and promote better upper extremity recovery. Objective Our first objective was to verify the feasibility of detecting compensation online using machine learning methods and pressure distribution data. The second objective was to investigate whether compensations of stroke survivors can be reduced by audiovisual or force feedback. The third objective was to compare the effectiveness of audiovisual and force feedback in reducing compensation. Methods Eight patients with stroke performed reaching tasks while pressure distribution data were recorded. Both offline and online recognition accuracy were investigated to assess the feasibility of applying a support vector machine (SVM) based compensation detection system. For reduction of compensation, audiovisual feedback was delivered using virtual reality technology, and force feedback was delivered through a rehabilitation robot. Results Good classification performance was obtained in online compensation recognition, with an average F1-score above 0.95. Based on accurate online detection, real-time feedback significantly decreased compensations of patients with stroke in comparison with the no-feedback condition (p < 0.001). The difference between audiovisual and force feedback was also significant (p < 0.001), with force feedback more effective in reducing compensation. Conclusions Accurate online recognition validated the feasibility of monitoring compensations using machine learning algorithms and pressure distribution data. Reliable online detection also paved the way for reducing compensations by providing feedback to patients with stroke. Our findings suggest that real-time feedback could be an effective approach to reducing compensatory patterns, and that force feedback shows greater potential than audiovisual feedback.
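To make the detection idea concrete: trunk compensation during seated reaching tends to shift the center of pressure (CoP) on a pressure mat, so CoP displacement is one plausible feature such a system could compute from the pressure distribution. The sketch below is illustrative only and is not the authors' pipeline: where the study trains an SVM on pressure features, a simple CoP-shift threshold stands in here, and the grid layout and threshold are assumptions.

```python
def centre_of_pressure(grid):
    """CoP of a 2D pressure-mat frame, as (row, col) in cell units."""
    total = sum(sum(row) for row in grid)
    if total == 0:
        return (0.0, 0.0)
    r = sum(i * sum(row) for i, row in enumerate(grid)) / total
    c = sum(j * v for row in grid for j, v in enumerate(row)) / total
    return (r, c)

def detect_compensation(baseline_grid, frame_grid, shift_thresh=0.5):
    """Flag a frame whose CoP moved more than `shift_thresh` cells from baseline.

    A real system would instead feed features like this into a trained
    classifier (the study used an SVM); the threshold here is illustrative.
    """
    r0, c0 = centre_of_pressure(baseline_grid)
    r1, c1 = centre_of_pressure(frame_grid)
    return ((r1 - r0) ** 2 + (c1 - c0) ** 2) ** 0.5 > shift_thresh
```

In the study's closed loop, a positive detection like this would trigger audiovisual feedback (via virtual reality) or force feedback (via the rehabilitation robot) in real time.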