Adaptive ultrasound reflectometry methods for lubrication film thickness measurement are of great use for condition monitoring and prognostics of systems that have high repair costs and are remotely located, such as offshore systems, because they recursively calibrate the incident ultrasound wave. Typical manual calibration requires a constant incident wave over the life cycle of the system, or until a new manual calibration can be conducted. Auto-calibration accounts for changes in the incident ultrasound wave caused by changing environmental conditions occurring over longer periods of time. The vision of adaptive ultrasound reflectometry methods is therefore increased robustness of lubrication film thickness measurements in a range of applications. In this article an adaptive scheme is proposed, based on a thin-layer time-of-flight method for thickness determination and an extended Kalman filter for estimation of the incident wave spectrum. The adaptive scheme is experimentally tested and the feasibility of the algorithm is established, but a disturbance analysis reveals serious issues regarding the robustness and reliability of the method. The experiments and a theoretical layer phase-lag sensitivity analysis show that estimation of the incident wave phase is of high importance for layer thicknesses above m, whereas for very thin layers below m the estimation of the magnitude dominates the measurement accuracy. This implies that research on adaptive schemes should be directed towards phase- or magnitude-tracking performance, depending on the working range of the layer thickness, so that sufficient robustness and reliability of the algorithms can be assured.
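The recursive calibration idea above can be sketched as a per-frequency-bin Kalman filter that tracks a slowly drifting incident-wave magnitude spectrum under a random-walk model. This is a simplified linear stand-in for the extended Kalman filter described in the abstract (the full method also tracks phase and has a nonlinear reflection-coefficient measurement model); all function names, noise parameters, and the random-walk assumption are illustrative, not taken from the paper.

```python
import numpy as np

def track_incident_spectrum(measurements, q=1e-4, r=1e-2):
    """Track a slowly drifting incident-wave magnitude spectrum from
    noisy per-frame reflection spectra, one scalar Kalman filter per
    frequency bin under a random-walk state model.

    measurements : (n_frames, n_bins) array of observed magnitudes
    q            : assumed per-frame drift (process) variance
    r            : assumed measurement-noise variance
    """
    n_frames, n_bins = measurements.shape
    x = measurements[0].copy()        # state: estimated incident magnitude
    p = np.ones(n_bins)               # per-bin estimate variance
    estimates = np.empty_like(measurements)
    estimates[0] = x
    for k in range(1, n_frames):
        p = p + q                               # predict: random-walk drift
        gain = p / (p + r)                      # per-bin Kalman gain
        x = x + gain * (measurements[k] - x)    # correct with the new frame
        p = (1.0 - gain) * p
        estimates[k] = x
    return estimates
```

With small `q` relative to `r`, the filter averages over many frames and so rejects measurement noise while still following slow environmental drift, which is the trade-off the auto-calibration scheme relies on.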
Brain-computer interfaces (BCIs) have been proven to be useful for stroke rehabilitation, but a number of factors impede the use of this technology in rehabilitation clinics and at home, the major ones being the usability and cost of the BCI system. The aims of this study were to develop a cheap 3D-printed wrist exoskeleton that can be controlled by a cheap open-source BCI (OpenViBE), and to determine whether training with such a setup could induce neural plasticity. Eleven healthy volunteers imagined wrist extensions, which were detected from single-trial electroencephalography (EEG), and in response the wrist exoskeleton replicated the intended movement. Motor-evoked potentials (MEPs) elicited using transcranial magnetic stimulation were measured before, immediately after, and 30 min after BCI training with the exoskeleton. The BCI system had a true positive rate of 86 ± 12% with 1.20 ± 0.57 false detections per minute. Compared to the measurement before the BCI training, the MEPs increased by 35 ± 60% immediately after and 67 ± 60% 30 min after the BCI training. There was no association between the BCI performance and the induction of plasticity. In conclusion, it is possible to detect imaginary movements using an open-source BCI setup and control a cheap 3D-printed exoskeleton that, combined with the BCI, can induce neural plasticity. These findings may promote the availability of BCI technology for rehabilitation clinics and home use. However, the usability must be improved, and further tests are needed with stroke patients.
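The two performance metrics reported above (true positive rate and false detections per minute) can be computed from detection and cue timestamps as in the following sketch. The matching rule, tolerance window, and function names are assumptions for illustration; the paper's exact scoring procedure is not specified in the abstract.

```python
def bci_performance(detections, cues, tolerance=1.0, duration_min=10.0):
    """Score single-trial movement detections against cue onsets.

    detections   : list of detection timestamps (s)
    cues         : list of movement-imagery cue onsets (s)
    tolerance    : a detection within this window (s) of an unmatched
                   cue counts as a true positive
    duration_min : recording length in minutes

    Returns (true positive rate, false detections per minute).
    """
    matched = set()
    false_det = 0
    for d in detections:
        hit = next((c for c in cues
                    if abs(d - c) <= tolerance and c not in matched), None)
        if hit is not None:
            matched.add(hit)        # detection explained by a cue
        else:
            false_det += 1          # detection with no nearby cue
    tpr = len(matched) / len(cues) if cues else 0.0
    return tpr, false_det / duration_min
```

For example, detections at 1.0 s, 5.2 s, and 9.0 s against cues at 1.1 s and 5.0 s with a 0.5 s tolerance give a true positive rate of 1.0 and, over a 10-minute recording, 0.1 false detections per minute.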
Individuals with severe tetraplegia can benefit from brain-computer interfaces (BCIs). While most movement-related BCI systems focus on right/left hand and/or foot movements, very few studies have considered tongue movements to construct a multiclass BCI. The aim of this study was to decode four movement directions of the tongue (left, right, up, and down) from single-trial pre-movement EEG and provide a feature and classifier investigation. In offline analyses (from ten healthy participants), detection and classification were performed using temporal, spectral, entropy, and template features, classified using linear discriminant analysis, support vector machine, random forest, or multilayer perceptron classifiers. Besides the 4-class classification scenario, all possible 3- and 2-class scenarios were tested to find the most discriminable movement type. The linear discriminant analysis achieved, on average, higher classification accuracies for both movement detection and classification. The right and down tongue movements provided the highest and lowest detection accuracy (95.3±4.3% and 91.7±4.8%), respectively. The 4-class classification achieved an accuracy of 62.6±7.2%, while the best 3-class classification (using left, right, and up movements) and 2-class classification (using left and right movements) achieved an accuracy of 75.6±8.4% and 87.7±8.0%, respectively. Using only a combination of the temporal and template feature groups provided further classification accuracy improvements. Presumably, this is because these feature groups utilize the movement-related cortical potentials, which are noticeably different on the left versus the right brain hemisphere for the different movements. This study shows that the cortical representation of the tongue is useful for extracting control signals for multi-class movement detection BCIs.
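The multi-class decoding step can be illustrated with a nearest-class-mean classifier, which is the decision rule linear discriminant analysis reduces to under a shared identity covariance. This is a simplified stand-in for the paper's LDA pipeline, shown here on synthetic feature vectors rather than real EEG features; the function names and data layout are assumptions.

```python
import numpy as np

def fit_class_means(X, y):
    """Fit a nearest-class-mean classifier: one mean feature vector per
    movement class (e.g. tongue left/right/up/down)."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def predict(classes, means, X):
    """Assign each trial to the class whose mean is nearest in
    (squared) Euclidean distance."""
    dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(dists, axis=1)]
```

Full LDA additionally whitens the features with a pooled within-class covariance estimate before measuring distances, which matters when feature dimensions are correlated or differently scaled, as EEG-derived features typically are.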