The visual evoked potential (VEP) has been used as an alternative method for assessing visual acuity objectively, especially in non-verbal infants and in adults with intellectual disabilities or suspected malingering. By sweeping the spatial frequency of the visual stimuli and recording the corresponding VEP, a VEP acuity can be defined from analysis of the electroencephalography (EEG) signals. This paper reviews the VEP-based visual acuity assessment technique, covering a brief overview of the technique, the effects of the visual-stimulus parameters, signal acquisition and analysis in the VEP acuity test, and a summary of its current clinical applications. Finally, we discuss open problems in this research domain and potential future work, which may enable the technique to be adopted more widely and rapidly and deepen VEP and broader electrophysiological research on the detection and diagnosis of visual function.
Steady-state visual evoked potential (SSVEP) visual acuity is usually defined either by extrapolating a straight line, regressed through the significant SSVEP amplitudes plotted against spatial frequency, down to 0 µV or to a noise-level floor, or by taking the finest spatial frequency that evokes a significant SSVEP. This study compared the performance of the commonly used threshold determination criteria of the extrapolation technique and the finest-spatial-frequency technique. Visual acuity was measured in ten adults both with the Freiburg Visual Acuity Test (FrACT) and from SSVEPs elicited by vertical sinusoidal reversal gratings. Four criteria were used to determine the SSVEP acuity: linear extrapolation to zero (C1), linear extrapolation to the noise-level baseline (C2), linear extrapolation to zero against log spatial frequency (C3), and the finest-spatial-frequency technique with significance determined by canonical correlation analysis (CCA) and an "OR" operation (C4). Bland-Altman analysis showed good agreement between the SSVEP and FrACT acuities for all four threshold estimation criteria. One-way repeated-measures ANOVA with Bonferroni post-hoc analysis found no significant differences among the acuities measured by FrACT and the four criteria, except that the acuity estimated by C1 was slightly higher than that of C2, indicating that these estimation methods perform similarly in evaluating visual function. The correlation and agreement between subjective FrACT acuity and objective SSVEP acuity were good for all four criteria, demonstrating that each of them performs well in SSVEP visual acuity assessment.
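As a sketch of the extrapolation criteria described in this abstract, the snippet below fits a line through SSVEP amplitudes versus spatial frequency and extrapolates to 0 µV (C1) or to a noise-level baseline (C2). The amplitude values, frequencies, and the 0.3 µV noise floor are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical significant SSVEP amplitudes (µV) at the tested spatial
# frequencies (cycles per degree); values are illustrative only.
spatial_freqs = np.array([2.0, 4.0, 8.0, 12.0, 16.0])
amplitudes = np.array([3.1, 2.6, 1.8, 1.1, 0.5])

# Fit a straight line: amplitude = slope * spatial_frequency + intercept.
slope, intercept = np.polyfit(spatial_freqs, amplitudes, 1)

# Criterion C1: spatial frequency where the fitted line crosses 0 µV.
acuity_c1 = -intercept / slope

# Criterion C2: spatial frequency where the line crosses an assumed
# noise-level baseline instead of zero.
noise_floor = 0.3  # µV, assumed noise level
acuity_c2 = (noise_floor - intercept) / slope

print(acuity_c1, acuity_c2)  # C2 threshold lies below C1, as expected
```

Because the noise floor sits above zero, C2 always yields a slightly lower (more conservative) acuity threshold than C1 for a negatively sloped fit, consistent with the small C1-versus-C2 difference reported above.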
In brain-computer interface (BCI) applications, variations across sessions and subjects change the properties of the recorded brain potentials. This leads to differences in the feature distribution of the electroencephalogram (EEG) across subjects, which greatly reduces the generalization ability of a classifier. Although the subject-dependent (SD) strategy offers a promising way to personalize classification, it cannot achieve the expected performance because of the limited amount of data, especially for a deep neural network (DNN) classification model. Here, we propose an instance-transfer subject-independent (ITSD) framework combined with a convolutional neural network (CNN) to improve classification accuracy during a motor imagery (MI) task. The proposed framework consists of the following steps. First, an instance transfer learning method based on the perceptual hash algorithm is proposed to measure the similarity of spectrogram EEG signals between subjects. Then, a CNN is developed to decode the signals after instance transfer. Next, the classification performance of different training strategies (subject-independent (SI)-CNN, SD-CNN, and ITSD-CNN) is compared. To verify the effectiveness of the algorithm, we evaluate it on the BCI Competition IV-2b dataset. Experiments show that instance transfer learning achieves positive transfer with a CNN classification model. Among the three training strategies, ITSD-CNN achieves an average classification accuracy of 94.7% ± 2.6% and a clear improvement over the contrast models (p < 0.01). Compared with methods proposed in previous research, the ITSD-CNN framework outperforms the state-of-the-art classification methods with a mean kappa value of 0.664.
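A minimal illustration of the perceptual-hash similarity step is given below, using the simple "average hash" variant: each spectrogram is block-averaged to a small grid, thresholded at its mean to produce a bit string, and two spectrograms are compared by the fraction of matching bits. The synthetic spectrograms and the 8×8 hash size are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Perceptual average hash: block-average the image down to
    hash_size x hash_size, then threshold each cell at the mean."""
    h, w = img.shape
    img = img[:h - h % hash_size, :w - w % hash_size]  # crop to a multiple
    small = img.reshape(hash_size, img.shape[0] // hash_size,
                        hash_size, img.shape[1] // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming_similarity(h1, h2):
    """Fraction of matching hash bits: 1.0 = identical, ~0.5 = unrelated."""
    return float(np.mean(h1 == h2))

# Illustrative spectrograms (time x frequency) from three "subjects".
rng = np.random.default_rng(0)
spec_a = rng.random((64, 64))
spec_b = spec_a + 0.05 * rng.random((64, 64))  # similar to subject A
spec_c = rng.random((64, 64))                  # unrelated subject

h_a, h_b, h_c = map(average_hash, (spec_a, spec_b, spec_c))
sim_same = hamming_similarity(h_a, h_b)
sim_diff = hamming_similarity(h_a, h_c)
print(sim_same, sim_diff)  # similar pair scores much higher
```

In an instance-transfer setting, a similarity score like this could be used to rank other subjects' trials and select the closest ones to augment a target subject's scarce training data.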
Nowadays, more people go to bed late and spend their pre-sleep time with various electronic devices. At the same time, BCI (brain–computer interface) rehabilitation equipment uses a visual display, so visual fatigue must be evaluated to avoid compromising the training effect. It is therefore important to understand how using electronic devices in a dark environment at night affects human visual fatigue. In this study, color stimulation paradigms were programmed in MATLAB and presented on a 4K display with adjustable screen brightness; an eye tracker and g.tec electroencephalography (EEG) equipment were used to collect signals, which were then processed and analyzed to determine how combinations of color and screen brightness affect human visual fatigue in a dark environment. Subjects gave subjective ratings (Likert scale), and objective signals (pupil diameter, θ + α frequency-band data) were collected in a dark environment (<3 lx). The Likert scale showed that low screen brightness in the dark environment reduced the subjects' visual fatigue, and participants preferred blue to red. The pupil data revealed that visual perception sensitivity was more vulnerable to stimulation at medium and high screen brightness, which more readily deepens visual fatigue. The EEG band data showed no significant effect of paradigm color or screen brightness on visual fatigue. On this basis, this paper puts forward a new index, the visual anti-fatigue index, which provides a valuable reference for optimizing the indoor living environment, improving satisfaction with electronic and BCI rehabilitation equipment, and protecting human eyes.
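One common way to obtain θ + α frequency-band data like that used as an objective fatigue measure above is to sum EEG spectral power over the two bands. The sketch below does this with a plain FFT power spectrum on a synthetic trace; the band edges (θ: 4–8 Hz, α: 8–13 Hz) are conventional choices, not values stated in the abstract.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Total power in [f_lo, f_hi] Hz from a one-sided FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum()

# Illustrative 10 s EEG trace: 6 Hz theta and 10 Hz alpha components
# plus white noise (amplitudes are arbitrary).
fs, n = 250, 2500
rng = np.random.default_rng(2)
t = np.arange(n) / fs
eeg = (2.0 * np.sin(2 * np.pi * 6 * t)
       + 1.5 * np.sin(2 * np.pi * 10 * t)
       + rng.standard_normal(n))

theta = band_power(eeg, fs, 4, 8)    # theta-band power
alpha = band_power(eeg, fs, 8, 13)   # alpha-band power
fatigue_measure = theta + alpha      # combined theta + alpha power
print(theta, alpha, fatigue_measure)
```

In practice a windowed estimator such as Welch's method, averaged over epochs and electrodes, would typically replace the single raw FFT used here.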
Objective. This study aimed to explore an online, real-time, and precise method to assess steady-state visual evoked potential (SSVEP)-based visual acuity more rapidly and objectively using self-adaptive spatial-frequency steps. Approach. Vertical sinusoidal reversal gratings with different spatial and temporal frequencies served as the visual stimuli. Following the psychometric function for visual acuity assessment, a self-adaptive procedure, the best parameter estimation by sequential testing (best PEST) algorithm, computed each spatial frequency in the sequence from all previous spatial frequencies and the significance of their SSVEP responses. Simultaneously, canonical correlation analysis (CCA) with a signal-to-noise ratio (SNR) significance detection criterion was used to judge the significance of the SSVEP response. Main results. After 18 iterative trials, the presented spatial frequency converged to a value that was taken as the SSVEP visual acuity threshold. Our results indicated that this SSVEP acuity had good agreement and correlation with subjective Freiburg Visual Acuity and Contrast Test acuity, and the test–retest repeatability was also good. Significance. The self-adaptive-step SSVEP procedure combined with the CCA method and the SNR significance detection criterion appears to be an alternative method for real-time SSVEP acuity testing, yielding objective visual acuity more rapidly and precisely.
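The CCA step used for significance detection can be sketched as follows: the canonical correlations between multichannel EEG and a set of sin/cos references at the stimulation frequency are the singular values of the product of the two blocks' orthonormal bases. All signal parameters below (channel count, amplitudes, frequencies) are illustrative assumptions, not the study's settings.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between two multichannel signals
    (rows = channels, columns = samples), via QR orthonormalization."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    Qx, _ = np.linalg.qr(X.T)          # orthonormal basis of X's span
    Qy, _ = np.linalg.qr(Y.T)          # orthonormal basis of Y's span
    # Singular values of Qx^T Qy are the canonical correlations.
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def sine_refs(freq, fs, n_samples, n_harmonics=2):
    """Sin/cos reference signals at the target frequency and harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.array(refs)

# Illustrative 3-channel EEG with a 7.5 Hz reversal response plus noise.
fs, f_stim, n = 250, 7.5, 1000
rng = np.random.default_rng(1)
t = np.arange(n) / fs
eeg = 0.5 * np.sin(2 * np.pi * f_stim * t) + rng.standard_normal((3, n))

rho = cca_max_corr(eeg, sine_refs(f_stim, fs, n))   # at the stimulus frequency
rho_off = cca_max_corr(eeg, sine_refs(11.0, fs, n)) # off-frequency control
print(rho, rho_off)  # on-frequency correlation is clearly larger
```

A significance rule in the spirit of the abstract would compare the on-frequency correlation (or an SNR derived from it) against a noise-level threshold estimated from off-frequency or baseline segments.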