Future health systems require the means to assess and track the neural and physiological function of a user over long periods of time and in the community. Human body responses are manifested through multiple, interacting modalities: the mechanical, electrical, and chemical. Yet, current physiological monitors (e.g. actigraphy, heart rate) largely lack cross-modal ability and are inconvenient and/or stigmatizing. We address these challenges through an inconspicuous earpiece, which benefits from the relatively stable position of the ear canal with respect to vital organs. Equipped with miniature multimodal sensors, it robustly measures brain, cardiac, and respiratory function. Comprehensive experiments validate each modality within the proposed earpiece, while its potential in wearable health monitoring is illustrated through case studies spanning these three functions. We further demonstrate how combining data from multiple sensors within such an integrated wearable device improves both the accuracy of measurements and the ability to deal with artifacts in real-world scenarios.
The use of EEG as a biometrics modality has been investigated for about a decade; however, its feasibility in real-world applications has not yet been conclusively established, mainly owing to issues with collectability and reproducibility. To this end, we propose a readily deployable EEG biometrics system based on a 'one-fits-all' viscoelastic generic in-ear EEG sensor (collectability), which requires neither skilled assistance nor cumbersome preparation. Unlike most existing studies, we consider data recorded over multiple recording days and from multiple subjects (reproducibility) while, for rigour, the training and test segments are not drawn from the same recording days. A robust approach is adopted, based on the resting-state eyes-closed paradigm and on both parametric (autoregressive model) and non-parametric (spectral) features, supported by simple and fast classifiers: cosine distance, linear discriminant analysis, and support vector machines. Both the verification and identification forensic scenarios are considered, and the achieved results are on par with studies based on impractical on-scalp recordings. Comprehensive analysis over a number of subjects, setups, and analysis features demonstrates the feasibility of the proposed ear-EEG biometrics and its potential to resolve the critical collectability, robustness, and reproducibility issues associated with current EEG biometrics.
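The feature pipeline described above (autoregressive coefficients compared with a cosine distance) can be sketched in Python. This is an illustrative outline only, not the authors' implementation: the function names, the AR order, and the use of biased autocorrelation estimates in the Yule-Walker equations are all assumptions.

```python
import numpy as np

def ar_coefficients(x, order=10):
    """Estimate AR model coefficients via the Yule-Walker equations
    (biased autocorrelation estimates, solved with plain numpy)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates r(0), ..., r(order)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system  R a = [r(1), ..., r(order)]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors (1 = identical direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```

In a verification setting, the AR feature vector of an enrolled subject's template would be compared against that of a probe recording, accepting the identity claim when the cosine similarity exceeds a threshold.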
Objective: Advances in sensor miniaturisation and computational power have served as enabling technologies for monitoring human physiological conditions in real-world scenarios. Sleep disruption may impact neural function and can be a symptom of both physical and mental disorders. This study proposes wearable in-ear electroencephalography (ear-EEG) for overnight sleep monitoring as a 24/7 continuous and unobtrusive technology for sleep quality assessment in the community. Methods: Twenty-two healthy participants took part in overnight sleep monitoring with simultaneous ear-EEG and conventional full polysomnography (PSG) recordings. The ear-EEG data were analysed in both the structural complexity and spectral domains; the extracted features were used for automatic sleep stage prediction through supervised machine learning, whereby the PSG data were manually scored by a sleep clinician. Results: The agreement between automatic sleep stage prediction based on ear-EEG from a single in-ear sensor and the hypnogram based on the full PSG was 74.1% accuracy for five-class sleep stage classification, corresponding to substantial agreement on the kappa metric (0.61). Conclusion: The in-ear sensor is both feasible for monitoring overnight sleep outside the sleep laboratory and mitigates the technical difficulties associated with PSG. It therefore represents a 24/7 continuously wearable alternative to conventional cumbersome and expensive sleep monitoring. Significance: The 'standardised' one-size-fits-all viscoelastic in-ear sensor is a next-generation solution for sleep monitoring; this technology promises to be a viable method for readily wearable sleep monitoring in the community, a key to affordable healthcare and future eHealth.
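The agreement metric quoted above, Cohen's kappa, corrects raw accuracy for the agreement expected by chance. A minimal sketch of its computation, assuming per-epoch stage labels from the clinician-scored PSG and from the automatic classifier (the function name is ours):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: chance-corrected agreement between two label sequences."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    po = np.mean(y_true == y_pred)  # observed agreement (raw accuracy)
    # Chance agreement: product of the raters' marginal label frequencies
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)
    return (po - pe) / (1.0 - pe)
```

On the conventional Landis-Koch scale, values in 0.61-0.80 are read as "substantial agreement", which is how the reported kappa of 0.61 is interpreted.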
The monitoring of sleep patterns without inconvenience to the patient or the involvement of a medical specialist is a clinical question of significant importance. To this end, we propose an automatic sleep stage monitoring system based on an affordable, unobtrusive, discreet, and long-term wearable in-ear sensor for recording the electroencephalogram (ear-EEG). The features selected for sleep pattern classification from a single ear-EEG channel are the spectral edge frequency and multi-scale fuzzy entropy, a structural complexity feature. In this preliminary study, the manually scored hypnograms from simultaneous scalp-EEG and ear-EEG recordings of four subjects are used as labels for two analysis scenarios: 1) classification of ear-EEG hypnogram labels from ear-EEG recordings; and 2) prediction of scalp-EEG hypnogram labels from ear-EEG recordings. We consider both 2-class and 4-class sleep scoring, with achieved accuracies ranging from 78.5% to 95.2% for ear-EEG labels predicted from ear-EEG, and from 76.8% to 91.8% for scalp-EEG labels predicted from ear-EEG. The corresponding kappa coefficients range from 0.64 to 0.83 for Scenario 1, indicating substantial to almost perfect agreement, while for Scenario 2 the range of 0.65–0.80 indicates substantial agreement, further supporting the feasibility of in-ear sensing for sleep monitoring in the community.
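One of the two features named above, the spectral edge frequency, is the frequency below which a fixed fraction of the signal's spectral power lies. A minimal sketch, assuming a simple periodogram-based estimate, a 95% edge, and a 30 Hz upper band limit (all assumptions, since the abstract does not specify them):

```python
import numpy as np

def spectral_edge_frequency(x, fs, edge=0.95, fmax=30.0):
    """Frequency below which `edge` of the spectral power up to `fmax` Hz lies."""
    x = np.asarray(x, dtype=float)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2  # periodogram (unnormalised)
    band = freqs <= fmax                          # restrict to the EEG band
    cum = np.cumsum(psd[band])
    idx = np.searchsorted(cum, edge * cum[-1])    # first bin reaching the edge
    return freqs[band][idx]
```

In practice an averaged spectrum (e.g. Welch's method) over each 30-second epoch would give a less noisy estimate than the raw periodogram used here.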
Automatic sleep stage classification is an important paradigm in computational intelligence and promises considerable advantages for health care. Most current automated methods require multiple electroencephalogram (EEG) channels and typically cannot distinguish the S1 sleep stage from the EEG. The aim of this study is to revisit automatic sleep stage classification from the EEG using methods from complexity science. The proposed method applies fuzzy entropy and permutation entropy as kernels of multi-scale entropy analysis. To account for sleep stage transitions, the 30-second epochs preceding and following the current epoch were used for analysis, in addition to the current epoch itself. Combining the entropy and spectral edge frequency features extracted from a single EEG channel, a multi-class support vector machine (SVM) classified five sleep stages with 93.8% accuracy on the expanded Sleep-EDF database, with a sensitivity for the S1 stage of 49.1%. Cohen's kappa coefficient was 0.90, indicating almost perfect agreement.
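Multi-scale entropy analysis, with permutation entropy as one of the kernels mentioned above, coarse-grains the signal at successive scales and computes an entropy value at each scale. A minimal sketch, assuming normalised permutation entropy of order 3 and non-overlapping averaging for coarse-graining (the parameter choices and function names are illustrative, not the paper's exact configuration):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy of a 1-D signal (0 = fully ordered)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # Ordinal pattern (rank order) of each embedded vector
    emb = np.array([x[i:i + order * delay:delay] for i in range(n)])
    patterns = np.argsort(emb, axis=1)
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)) / np.log(factorial(order)))

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (the multiscale step)."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def multiscale_permutation_entropy(x, order=3, max_scale=5):
    """Permutation entropy of the signal coarse-grained at scales 1..max_scale."""
    return [permutation_entropy(coarse_grain(x, s), order)
            for s in range(1, max_scale + 1)]
```

The resulting per-scale entropy values form part of the feature vector fed, together with the spectral edge frequency, to the multi-class SVM.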