2019
DOI: 10.1080/21681163.2019.1699165
A deep convolutional neural network for automated vestibular disorder classification using VNG analysis

Cited by 13 publications (9 citation statements)
References 17 publications
“…Although neural networks have previously been applied to several tasks involving eye-movement signals, such as classifying normal versus abnormal nystagmus during caloric tests [16] and detecting saccades [34], this study is the first example of 1D CNNs applied to the task of detecting entire nystagmus waveforms from within hours of normal eye-movement data. While heuristic approaches to detecting optokinetic nystagmus have been shown to yield high levels of classification accuracy (89.13% sensitivity and 98.54% specificity in [10], and 93% accuracy in [12]), these results are not comparable with our study as the data was captured during optokinetic tests and are extremely short in duration (8 seconds each in [10], compared to up to 24 hours in our longest example and almost an hour in the shortest).…”
Section: Discussion
Confidence: 99%
“…Such approaches, while effective when applied to short-term data that are known to contain nystagmus, can be slow to process large quantities of data and may produce many false positive detections when applied to highly variable long-term data. 1D Convolutional Neural Networks (CNNs) have also been used to classify diseased versus healthy induced nystagmus signals captured using video goggles in clinical settings [16]. Although this technique was not used to identify or confirm the presence of nystagmus, it is reassuring that it achieved a classification accuracy of 96.36% for discriminating signals from healthy people from those of patients suffering from Vestibular Neuritis and Ménière's disease.…”
Section: Introduction
Confidence: 99%
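The 1D CNN pipeline described in the excerpt above (convolutional filters slid over the raw eye-movement trace, then pooling and a classifier head) can be sketched in a few lines of NumPy. This is an illustrative toy with arbitrary, untrained weights and assumed filter sizes, not the architecture used in [16]:

```python
import numpy as np

def conv1d(signal, kernels, stride=1):
    """Valid 1D convolution: a (T,) signal and (K, W) kernels -> (K, T') feature maps."""
    K, W = kernels.shape
    out_len = (len(signal) - W) // stride + 1
    out = np.empty((K, out_len))
    for k in range(K):
        for t in range(out_len):
            out[k, t] = np.dot(signal[t * stride:t * stride + W], kernels[k])
    return out

def classify(signal, kernels, weights, bias):
    """Conv -> ReLU -> global max pooling -> sigmoid: probability of the 'disorder' class."""
    feats = np.maximum(conv1d(signal, kernels), 0.0)  # ReLU activation
    pooled = feats.max(axis=1)                        # global max pooling over time
    logit = pooled @ weights + bias                   # linear classifier head
    return 1.0 / (1.0 + np.exp(-logit))               # sigmoid squashes to (0, 1)

# Toy demo on a synthetic 200-sample trace; all weights are random and purely illustrative.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0.0, 20.0, 200))   # stand-in for a VNG eye-position trace
kernels = rng.standard_normal((4, 9))          # 4 filters of width 9 (assumed sizes)
prob = classify(signal, kernels, rng.standard_normal(4), 0.0)
print(f"P(vestibular disorder) = {prob:.3f}")
```

In a real system the kernels and classifier weights would be learned by backpropagation (e.g. in PyTorch or Keras), and the convolution would be followed by deeper stacks of filters; the sketch only shows why a 1D CNN suits this task, since the filters match short waveform shapes (such as the sawtooth slow/fast phases of nystagmus) anywhere in a long recording.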
“…Several classification methods can be employed for pattern recognition. Nevertheless, for multiple categorisation tasks [32,33], the DNN has been confirmed to be a high-quality tool. To obtain reliable classification results, we selected the PG region of different images to assemble the training dataset for the network.…”
Section: Classification of PG Lesion Using DBN-DNN Classifier
Confidence: 98%
“…Slama et al. [27], [28] used a multilayer neural network (MNN) with parameters recorded from Videonystagmography (VNG) data to analyse the nystagmus and diagnose whether a person has a vestibular disorder or is normal. The nystagmus signal from a Videonystagmography (VNG) device has been analysed by a CNN-based method to classify two classes of vestibular disorder [29]. Lim et al. [30] introduced a more complete approach that not only classifies the nystagmus but also diagnoses the final BPPV class using various ad-hoc techniques.…”
Section: A. BPPV Analysis
Confidence: 99%