The use of Convolutional Neural Networks (CNNs) as a feature learning method for Human Activity Recognition (HAR) is becoming increasingly common. Unlike conventional machine learning methods, which require domain-specific expertise, CNNs can extract features automatically. On the other hand, CNNs require a training phase, making them prone to the cold-start problem. In this work, a case study is presented in which the use of a pre-trained CNN feature extractor is evaluated under realistic conditions. The case study consists of two main steps: (1) different topologies and parameters are assessed to identify the best candidate models for HAR, thus obtaining a pre-trained CNN model; (2) the pre-trained model is then employed as a feature extractor, and its use is evaluated on a large-scale real-world dataset. Two CNN applications were considered: Inertial Measurement Unit (IMU)-based and audio-based HAR. For the IMU data, balanced accuracy was 91.98% on the UCI-HAR dataset and 67.51% on the real-world Extrasensory dataset. For the audio data, balanced accuracy was 92.30% on the DCASE 2017 dataset and 35.24% on the Extrasensory dataset.
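The idea of a frozen CNN feature extractor can be illustrated with a minimal sketch: fixed convolutional kernels are applied to a raw sensor window, followed by ReLU and global max pooling, yielding one feature per kernel. This is not the paper's model; the kernels, window length, and sizes below are purely illustrative (in the study the weights would come from the pre-training step).

```python
import random

def conv1d_features(signal, kernels):
    """Apply each 1-D kernel with valid convolution, then ReLU and global max pooling."""
    feats = []
    for k in kernels:
        n = len(signal) - len(k) + 1
        resp = [sum(signal[i + j] * k[j] for j in range(len(k))) for i in range(n)]
        # ReLU followed by global max pooling: one scalar feature per kernel
        feats.append(max(max(r, 0.0) for r in resp))
    return feats

rng = random.Random(0)
# Hypothetical "pre-trained" kernels; in practice these weights come from CNN training.
kernels = [[rng.gauss(0, 1) for _ in range(16)] for _ in range(8)]
window = [rng.gauss(0, 1) for _ in range(128)]  # one IMU window of 128 samples
features = conv1d_features(window, kernels)
print(len(features))  # one feature per kernel
```

The resulting feature vector would then feed a lightweight classifier on the target dataset, which is what makes the approach attractive for cold-start scenarios.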
Data annotation is a time-consuming process that poses major limitations to the development of Human Activity Recognition (HAR) systems. A large amount of labeled data is required for supervised Machine Learning (ML) approaches, especially for online and personalized approaches that require user-specific datasets to be labeled. The availability of such datasets has the potential to help address common problems of smartphone-based HAR, such as inter-person variability. In this work, we present (i) an automatic labeling method that facilitates the collection of labeled datasets in free-living conditions using the smartphone, and (ii) an investigation of the robustness of common supervised classification approaches in the presence of noisy labels. We evaluated the results on a dataset consisting of 38 days of manually labeled data collected in free-living conditions. The comparison between the manually and automatically labeled ground truth demonstrated that labels could be obtained automatically with an 80–85% average precision. Results also show that a supervised approach trained on automatically generated labels achieved an 84% f-score (using Neural Networks and Random Forests); however, the presence of label noise could lower the f-score to 64–74% depending on the classification approach (Nearest Centroid and Multi-Class Support Vector Machine).
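Robustness experiments of this kind typically inject symmetric label noise: a fraction of labels is flipped uniformly to a different class before training. The function below is a generic illustrative sketch, not the paper's noise model; the activity names and the 15% rate are hypothetical.

```python
import random

def inject_label_noise(labels, classes, noise_rate, seed=0):
    """Flip a fraction of labels uniformly at random to a *different* class."""
    rng = random.Random(seed)
    noisy = list(labels)
    n_flip = int(round(noise_rate * len(labels)))
    for i in rng.sample(range(len(labels)), n_flip):
        noisy[i] = rng.choice([c for c in classes if c != noisy[i]])
    return noisy

labels = ["walk", "sit", "stand", "walk", "sit", "walk", "stand", "sit"] * 10
noisy = inject_label_noise(labels, ["walk", "sit", "stand"], 0.15)
disagreement = sum(a != b for a, b in zip(labels, noisy)) / len(labels)
print(disagreement)  # 0.15 by construction: flipped labels always change class
```

Training the same classifier on `labels` versus `noisy` at increasing noise rates gives the kind of f-score degradation curve the study reports.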
Background: To meet the required hours of intensive intervention for treating children with autism spectrum disorder (ASD), we developed an automated serious gaming platform (11 games) to deliver intervention at home (GOLIAH) by mapping the imitation and joint attention (JA) subset of age-adapted stimuli from the Early Start Denver Model (ESDM) intervention. Here, we report the results of a 6-month matched controlled exploratory study. Methods: From two specialized clinics, we included 14 children (age range 5–8 years) with ASD and 10 controls matched for gender, age, site, and treatment as usual (TAU). In addition to TAU, participants in the experimental group received four 30-min sessions with GOLIAH per week at home and one at the hospital for 6 months. Statistics were performed using Linear Mixed Models. Results: Children and parents participated in 40% of the planned sessions. They were able to use the 11 games, and participants trained with GOLIAH improved time to perform the task in most JA games and imitation scores in most imitation games. The GOLIAH intervention did not affect Parental Stress Index scores. At endpoint, we found in both groups a significant improvement in Autism Diagnostic Observation Schedule scores, Vineland socialization score, Parental Stress Index total score, and Child Behavior Checklist internalizing, externalizing, and total problems. However, we found no significant change for the time × group interaction. Conclusions: Despite the lack of superiority of TAU + GOLIAH versus TAU, the results are interesting both in terms of the changes obtained by using the gaming platform and the absence of an increase in parental stress. A large randomized controlled trial with younger participants (who are the core target of the ESDM model) is now under discussion; this should be facilitated by porting GOLIAH to a web platform. Trial registration: ClinicalTrials.gov NCT02560415.
Autism Spectrum Disorders (ASD) are associated with physiological abnormalities, which are likely to contribute to the core symptoms of the condition. Wearable technologies can provide data in a semi-naturalistic setting, overcoming the limitations imposed by the constrained situations in which physiological signals are usually acquired. In this study, an integrated system based on wearable technologies for the acquisition and analysis of neurophysiological and autonomic parameters during treatment is proposed, and an application to five children with ASD is presented. Signals were acquired during a therapeutic session based on an imitation protocol for children with ASD. Data were analyzed with the aim of extracting quantitative EEG (QEEG) features from the EEG signals, as well as heart rate and heart rate variability (HRV) from the ECG. The system revealed changes in the neurophysiological and autonomic response from the state of disengagement to the state of engagement, indicating cognitive involvement of the children in the proposed tasks. The high acceptability of the monitoring platform is promising for further development and implementation of the tool. In particular, if the results of this feasibility study are confirmed in a larger sample of subjects, the proposed system could be adopted in more naturalistic paradigms that allow real-world stimuli to be incorporated into EEG/psychophysiological studies, both for monitoring the effect of treatment and for implementing more individualized therapeutic programs.
Nowadays, information and communication technologies (ICT) have become part of our everyday life, enhancing quality of life and promoting new forms of social interaction. Despite the numerous benefits of ICT, older adults still show low rates of ICT adoption compared to other population segments. The lack of accessible User Interfaces has been identified as a major barrier. Traditional User Interfaces follow a design-for-all approach, typically ignoring the needs of older adults. Recent research in Human-Computer Interaction (HCI) proposes adaptive User Interfaces that suit the individual user's abilities. Nevertheless, most existing approaches perform adaptation based on user profile groups and do not provide personalized adaptation in real time. This paper introduces a conceptual framework for developing real-time adaptive User Interfaces. The system targets the most common issues among older adults, i.e., cognitive decline and vision loss. The conceptual framework also presents novel strategic techniques to assess cognitive load and vision-related issues in a manner that is unobtrusive for the user.