Background
Multimodal wearable technologies have opened up wide possibilities in human activity recognition and, more specifically, in personalized monitoring of eating habits. The emerging challenge is selecting the most discriminative information from high-dimensional data collected from multiple sources. Existing fusion algorithms, with their complex structures, are poorly adapted to computationally constrained environments that require integrating information directly at the source. As a result, simpler low-level fusion methods are needed.
Objective
In the absence of a data-combining process, feeding high-dimensional raw data directly to a deep classifier is computationally expensive in terms of response time, energy consumption, and memory requirements. Taking this into account, we aimed to develop a computationally efficient data fusion technique that yields a more comprehensive insight into human activity dynamics in a lower dimension. The major objective was to exploit the statistical dependency of multisensory data and to explore intermodality correlation patterns across different activities.
Methods
In this technique, the information in time (regardless of the number of sources) is transformed into a 2D space that facilitates distinguishing eating episodes from other activities. The approach rests on the hypothesis that data captured by the various sensors are statistically associated with each other, and that the covariance matrix of these signals has a distinctive distribution for each activity, which can be encoded in a contour representation. These representations are then used as input to a deep model that learns the patterns associated with each activity.
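The covariance-to-contour idea described above can be sketched as follows. The abstract does not specify the exact encoding, so the quantization of covariance values into discrete isoline levels, the level count, and the normalization are all illustrative assumptions:

```python
import numpy as np

def covariance_contour(window, n_levels=10):
    """Encode one multichannel segment as a quantized covariance image.

    window: array of shape (n_channels, n_samples) -- one temporal segment.
    Returns an (n_channels, n_channels) integer image whose discrete level
    pattern plays the role of the contour representation described in the
    abstract (a sketch; the paper's actual encoding may differ).
    """
    cov = np.cov(window)                # channel-by-channel covariance matrix
    lo, hi = cov.min(), cov.max()
    # Map covariance values onto n_levels discrete isoline levels.
    levels = np.floor((cov - lo) / (hi - lo + 1e-12) * (n_levels - 1))
    return levels.astype(int)

rng = np.random.default_rng(0)
segment = rng.standard_normal((6, 256))   # e.g., 6 sensor channels, 256 samples
image = covariance_contour(segment)
print(image.shape)                        # (6, 6), regardless of segment length
```

Because the image size depends only on the number of channels, segments of any temporal length map to a fixed-size 2D input for the downstream deep model.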
Results
To demonstrate the generalizability of the proposed fusion algorithm, 2 different scenarios were considered. These scenarios differed in temporal segment size, type of activity, wearable device, subjects, and deep learning architecture. The first scenario used a data set in which a single participant performed a limited number of activities while wearing the Empatica E4 wristband. The second scenario used a data set of activities of daily living in which 10 different participants wore inertial measurement units while performing a more complex set of activities. The precision obtained from leave-one-subject-out cross-validation for the second scenario reached 0.803. The impact of missing data on performance degradation was also evaluated.
Conclusions
In conclusion, the proposed fusion technique makes it possible to embed joint variability information across different modalities in a single 2D representation, yielding a more global view of the various aspects of daily human activities while preserving the desired level of activity recognition performance.
Objective. Electroencephalogram (EEG) recordings often contain large segments with missing signals due to poor electrode contact or other artifact contamination. Recovering missing values, contaminated segments, and lost channels could be highly beneficial, especially for automatic classification algorithms, such as machine/deep learning models, whose performance relies heavily on high-quality data. The current study proposes a new method for recovering missing segments in EEG. Approach. In the proposed method, the reconstructed segment is estimated by substituting the missing part of the signal with the normalized weighted sum of the other channels. The weighting is based on the inter-channel correlation of the non-missing temporal windows immediately preceding and following the gap. The algorithm was designed to be computationally efficient. Experimental data from patients (N = 20) undergoing general anesthesia for elective surgery were used to validate the algorithm. The data were recorded using a portable ten-channel EEG device with a self-adhesive frontal electrode during induction of anesthesia with propofol, from the waking state until burst suppression, and contain substantial variation in both amplitude and frequency properties. The proposed imputation technique was compared with another simple-structure technique, using distance correlation (DC) as the evaluation measure. Main results. The proposed method, with an average distance correlation of 82.48 ± 10.01 (µ ± σ)%, outperformed its competitor, which achieved an average distance correlation of 67.89 ± 14.12 (µ ± σ)%. The proposed algorithm also maintained its advantage as the number of missing channels increased. Significance. The proposed technique provides an easy-to-implement and computationally efficient approach for the reliable reconstruction of missing or contaminated EEG segments.
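The correlation-weighted substitution described in this abstract can be sketched as below. The context-window length, the use of absolute correlations as weights, and the normalization are assumptions for illustration; the paper's exact windowing and normalization are not given in the abstract:

```python
import numpy as np

def impute_segment(data, ch, start, stop, ctx=256):
    """Reconstruct data[ch, start:stop] from the remaining channels.

    Weights are the correlations between channel `ch` and every other
    channel, estimated on the non-missing windows just before and after
    the gap (a simplified sketch of the correlation-weighted substitution
    idea; window length `ctx` and abs-weight normalization are assumptions).
    """
    n_ch = data.shape[0]
    pre = data[:, max(0, start - ctx):start]     # window preceding the gap
    post = data[:, stop:stop + ctx]              # window following the gap
    context = np.concatenate([pre, post], axis=1)
    corr = np.corrcoef(context)[ch]              # correlation with each channel
    others = [i for i in range(n_ch) if i != ch]
    w = np.abs(corr[others])
    w = w / (w.sum() + 1e-12)                    # normalized weights
    estimate = w @ data[others, start:stop]      # weighted sum of other channels
    out = data.copy()
    out[ch, start:stop] = estimate
    return out
```

The per-gap cost is one small correlation matrix and one weighted sum, consistent with the abstract's emphasis on computational efficiency.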
Objective: When developing approaches for automatic preprocessing of electroencephalogram (EEG) signals in non-isolated, demanding environments such as the intensive care unit (ICU), or even outdoors, a major concern is the time-varying nature of different artifacts in the time, frequency, and spatial domains, which makes a simple approach insufficient for reliable artifact removal. Considering this, the current study aims to use correlation-driven mapping to improve artifact detection performance. Approach: A framework is proposed for mapping signals from multichannel space (regardless of the number of EEG channels) into a two-dimensional RGB space, in which the correlations of all EEG channels are taken into account simultaneously; a deep convolutional neural network (CNN) can then learn the patterns in the generated 2D representation associated with specific artifacts. Main results: The method achieved a classification accuracy of 92.30% (AUC = 0.96) in a leave-three-subjects-out cross-validation procedure, evaluated on data comprising 2310 artifact-contaminated and 2285 artifact-free EEG sequences collected with a BrainStatus self-adhesive electrode and wireless amplifier from 15 intensive care patients. For further assessment, several scenarios were also tested, including the performance of the proposed method under different segment lengths, numbers of isolines, and numbers of channels. The results showed that the CNN fed with correlation coefficient data outperformed both a spectrogram-based CNN and EEGNet on the same dataset. Significance: This study showed the feasibility of using the correlation image of EEG channels, coupled with deep learning, as a promising tool for dimensionality reduction, channel fusion, and capturing varied artifact patterns in the temporal-spatial domains.
A simplified version of the proposed approach was also shown to be feasible for real-time application, with a latency of 0.0181 s per decision.
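The mapping from an arbitrary number of EEG channels to a fixed-size 2D RGB input, as described in this abstract, could be sketched as follows. The abstract does not state how correlations are colored, so the channel assignment below (red for positive correlation, blue for negative, green for magnitude) is purely an illustrative assumption:

```python
import numpy as np

def corr_to_rgb(window):
    """Encode the channel correlation matrix as an RGB image.

    window: array of shape (n_channels, n_samples).
    Returns an (n_channels, n_channels, 3) uint8 image. The color scheme
    here is hypothetical: red encodes positive correlation strength,
    blue negative correlation strength, green absolute correlation.
    """
    corr = np.corrcoef(window)           # values in [-1, 1]
    r = np.clip(corr, 0.0, 1.0)          # positive correlations
    b = np.clip(-corr, 0.0, 1.0)         # negative correlations
    g = np.abs(corr)                     # overall correlation magnitude
    img = np.stack([r, g, b], axis=-1)
    return (img * 255).astype(np.uint8)

rng = np.random.default_rng(2)
eeg_window = rng.standard_normal((8, 512))   # 8 channels, 512 samples
image = corr_to_rgb(eeg_window)
print(image.shape)                           # (8, 8, 3)
```

Because the output size depends only on the channel count, such an image can serve as a standard RGB input to a CNN, which is the fusion-and-dimensionality-reduction role the abstract describes.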