“…In the tinnitus literature, the classification of brain data has often been done using different ML methods. One of the most commonly used methods is the Support Vector Machine (SVM), which uses supervised learning to detect the relationship between data samples and their class labels (163, 170, 171). SVM learns from data to place a hyperplane in an optimal position in the data space such that the samples are best separated with respect to their classes (159).…”
Section: Results
confidence: 99%
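The SVM idea described in the snippet above — fitting a hyperplane that best separates labeled samples — can be sketched in a few lines. This is a minimal illustration with synthetic two-dimensional "feature" data, not the features or data of the cited studies.

```python
# Minimal sketch of SVM classification: a linear kernel fits a
# separating hyperplane between two labeled classes.
# The synthetic feature vectors below are illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two toy classes of 2-D feature vectors (e.g., band-power features)
tinnitus = rng.normal(loc=1.0, scale=0.5, size=(50, 2))
control = rng.normal(loc=-1.0, scale=0.5, size=(50, 2))
X = np.vstack([tinnitus, control])
y = np.array([1] * 50 + [0] * 50)

clf = SVC(kernel="linear")  # linear kernel -> a separating hyperplane
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

With well-separated classes like these, the learned hyperplane classifies nearly all training samples correctly; real EEG features are far noisier and require held-out evaluation.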
“…Although EEG is widely applied in tinnitus research, few ML methods have been developed to classify tinnitus patients from healthy people using EEG. Sun et al. (171) proposed a multi-view intact space learning method to distinguish EEG signals and classify tinnitus patients from healthy people using an SVM classifier, with an accuracy of 99%. Monaghan et al. (172) applied SVM techniques to classify (at the individual level) tinnitus patients from healthy people based on their auditory brainstem responses.…”
Background: Digital processing has enabled the development of several generations of technology for tinnitus therapy. The first digital generation comprised digital Hearing Aids (HAs) and personal digital music players implementing already established sound-based therapies, as well as text-based information on the internet. In the second generation, smartphone applications (apps), alone or in conjunction with HAs, gave users more therapy options to select from. The third generation of digital tinnitus technologies began with the emergence of many novel, largely neurophysiologically inspired treatment theories that drove the development of processing, delivered through HAs, apps, the internet, and stand-alone devices. We are now on the cusp of a fourth generation that will incorporate physiological sensors, multiple transducers, and AI to personalize therapies.
Aim: To review technologies that will enable the next generations of digital therapies for tinnitus.
Methods: A “state-of-the-art” review was undertaken to answer the question: what digital technology could be applied to tinnitus therapy in the next 10 years? Google Scholar and PubMed were searched for the 10-year period 2011–2021. The search strategy used the following key words: “tinnitus” and [“HA,” “personalized therapy,” “AI” (and “methods” or “applications”), “Virtual reality,” “Games,” “Sensors” and “Transducers”], and “Hearables.” Snowballing was used to expand the search from the identified papers. The results of the review were cataloged and organized into themes.
Results: This paper identified digital technologies and research on the development of smart therapies for tinnitus. AI methods that could have tinnitus applications are identified and discussed. The potential of personalized treatments and the benefits of being able to gather data in ecologically valid settings are outlined.
Conclusions: There is huge scope for the application of digital technology to tinnitus therapy, but the uncertain mechanisms underpinning tinnitus present a challenge, and many posited therapeutic approaches may not be successful. Personalized AI modeling based on biometric measures obtained through various sensor types, together with assessments of individual psychology and lifestyle, should result in the development of smart therapy platforms for tinnitus.
“…Some efforts aim to distinguish tinnitus patients from control subjects using machine learning. Sun et al. [20] extracted Principal Component Analysis (PCA), Fast Fourier Transform (FFT), and frequency-domain statistical features for analysis. Similarly, Li et al. [18] preprocessed data in the frequency domain and further extracted features by cosine mapping and main-phase computing.…”
Section: Related Work
confidence: 99%
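The FFT-plus-PCA feature pipeline mentioned above can be sketched as follows. The signal shapes, component count, and synthetic data here are illustrative assumptions, not the parameters of the cited studies.

```python
# Hedged sketch of frequency-domain feature extraction: compute the
# power spectrum of each trial via FFT, then reduce dimensionality
# with PCA. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# 20 trials of a 256-sample single-channel "EEG" signal
signals = rng.normal(size=(20, 256))

# Frequency-domain features: power spectrum of each trial
power = np.abs(np.fft.rfft(signals, axis=1)) ** 2  # shape (20, 129)

# Compress the spectra to a handful of principal components
features = PCA(n_components=5).fit_transform(power)
print(features.shape)
```

The resulting low-dimensional feature matrix is what would then be fed to a classifier such as an SVM.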
“…Both works achieved good performance in their experiments. However, these studies [18], [20] were subject-dependent, meaning that some test samples came from the same subjects as the training samples. Short-time sampling from the same subject produces similar samples, so subject-dependent experiments may place near-identical samples in both the training and test sets, which can overestimate model performance.…”
Section: Related Work
confidence: 99%
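The subject-dependence pitfall described in the snippet above is avoided by grouping splits by subject, so no subject appears in both training and test sets. A minimal sketch with synthetic data, using scikit-learn's `GroupKFold`:

```python
# Subject-independent evaluation: GroupKFold keeps every sample from a
# given subject in a single fold, so test subjects are never seen
# during training. Data are synthetic.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(2)
X = rng.normal(size=(12, 4))           # 12 samples, 4 features
y = rng.integers(0, 2, size=12)        # binary labels
subjects = np.repeat([0, 1, 2, 3], 3)  # 3 samples per subject

for train_idx, test_idx in GroupKFold(n_splits=4).split(X, y, groups=subjects):
    train_subj = set(subjects[train_idx])
    test_subj = set(subjects[test_idx])
    assert train_subj.isdisjoint(test_subj)  # no subject leakage
```

A random (non-grouped) split would let correlated samples from one subject leak across the train/test boundary, inflating reported accuracy.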
“…Compared with previous traditional auditory tests [13], [14], which assess patients' tinnitus based on the patients' cognitive judgement, the auditory brainstem response (ABR) recorded through electroencephalography (EEG) provides real-time numerical feedback from the nervous system using non-invasive wearable devices. While this neurofeedback can be an effective data source for experts, who manually analyze it and decide on the proper sound treatment for patients [15], [16], [17], machine learning and deep learning methods, e.g., support vector machines (SVMs) [18], neural networks [19], [20], and autoencoders [21], have achieved extraordinary performance in EEG-based neurofeedback analysis. Recently, generative models have shown potential for overcoming subject variance in tinnitus neurofeedback analysis [22], [23], given their capability in domain alignment and domain transfer.…”
Electroencephalogram (EEG)-based neurofeedback has been widely studied for tinnitus therapy in recent years. Most existing research relies on experts' cognitive prediction, and studies based on machine learning and deep learning are either data-hungry or do not generalize well to new subjects. In this paper, we propose a robust, data-efficient model for distinguishing tinnitus from the healthy state based on EEG-based tinnitus neurofeedback. We propose the trend descriptor, a feature extractor with lower fineness, to reduce the effect of electrode noise on EEG signals, and a siamese encoder-decoder network, boosted in a supervised manner, to learn accurate alignment and to acquire high-quality transferable mappings across subjects and EEG signal channels. Our experiments show the proposed method significantly outperforms state-of-the-art algorithms when analyzing subjects' EEG neurofeedback to 90 dB and 100 dB sound, achieving an accuracy of 91.67%–94.44% in predicting tinnitus and control subjects in a subject-independent setting. Our ablation studies on mixed subjects and parameters demonstrate the method's performance stability.