2017 International Conference on Orange Technologies (ICOT)
DOI: 10.1109/icot.2017.8336088

Snoring and apnea detection based on hybrid neural networks

Cited by 15 publications (9 citation statements)
References 5 publications
“…The samples were also annotated as snore and non-snore events and were divided into training (11), validation (3), and test (6) sets. The authors in [12] used an Olympus ME52 noise-canceling microphone hung 20-30 cm above each of the 24 volunteers' heads to record throughout the night. The samples consist of three events: snore, apnea, and silence.…”
Section: Snore Data Created Through Subjects
Mentioning confidence: 99%
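The excerpt does not say whether the 11/3/6 counts refer to subjects or recordings. A common precaution in snore and apnea work is to assign whole subjects (or whole recordings) to a single split so that no volunteer's audio leaks between training and test; the sketch below assumes that convention, with hypothetical subject IDs and the counts taken from the excerpt.

```python
# Minimal sketch of a subject-level split into 11/3/6 groups.
# Subject IDs and file layout are assumptions, not details from [12].
import random

def split_subjects(subject_ids, n_train=11, n_val=3, n_test=6, seed=0):
    """Shuffle subjects and assign each whole subject to exactly one split,
    so no subject's recordings appear in more than one of train/val/test."""
    assert len(subject_ids) == n_train + n_val + n_test
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

# Example with 20 hypothetical subject IDs (11 + 3 + 6 = 20).
train_ids, val_ids, test_ids = split_subjects([f"subj_{i:02d}" for i in range(20)])
```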
“…The approach was very effective and yielded an excellent result. The approach in [12] applied linear predictive coding (LPC), MFCC, and sub-band features for feature extraction, together with hybrid deep neural networks (DNN + LSTM; CNN + LSTM), to classify snoring, apnea, and silence events. However, the classification accuracy of both hybrids was less than 90%.…”
Section: A Brief Survey of Existing Related Work
Mentioning confidence: 99%
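For readers unfamiliar with the hybrid architectures mentioned in the excerpt, the following is a minimal sketch of a CNN + LSTM classifier operating on MFCC frame sequences for the three classes (snore, apnea, silence). The window length, number of MFCCs, and layer sizes are illustrative assumptions, not the configuration reported in [12].

```python
# Hedged sketch of a CNN + LSTM hybrid over MFCC frames for 3-class
# (snore / apnea / silence) segment classification.
from tensorflow.keras import layers, models

N_FRAMES, N_MFCC, N_CLASSES = 200, 13, 3   # ~2 s of 10 ms frames (assumed)

def build_cnn_lstm(n_frames=N_FRAMES, n_mfcc=N_MFCC, n_classes=N_CLASSES):
    """1-D conv layers learn local spectral patterns, an LSTM summarizes
    their temporal evolution, and a softmax layer picks the event class."""
    inputs = layers.Input(shape=(n_frames, n_mfcc))
    x = layers.Conv1D(32, kernel_size=5, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.LSTM(64)(x)                 # last hidden state as segment embedding
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
# model.fit(mfcc_segments, labels, ...) where mfcc_segments has shape
# (num_segments, N_FRAMES, N_MFCC) and labels in {0: snore, 1: apnea, 2: silence}.
```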
“…An investigation of the developed systems that attempt to diagnose apnea syndrome reveals the dominance of neural networks in this field of sound classification [7]. Mainly depending on convolutional deep networks, the proposed systems classify the tracheal sound excerpts by revealing deep features in the spectrotemporal changes of the sound signal [8].…”
Section: Introductionmentioning
confidence: 99%
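The spectrotemporal input that such convolutional systems typically consume is a spectrogram treated as a single-channel image. Below is a minimal sketch, using librosa, of computing a log-mel spectrogram from a short sound excerpt; the file name, sample rate, and STFT parameters are illustrative assumptions, not values taken from [7] or [8].

```python
# Minimal sketch of preparing a log-mel spectrogram as CNN input.
import librosa
import numpy as np

def log_mel_spectrogram(path, sr=16000, n_fft=512, hop_length=160, n_mels=64):
    """Load an excerpt and return an (n_mels, time) log-scaled mel spectrogram
    that a 2-D convolutional network can treat as a single-channel image."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

# spec = log_mel_spectrogram("tracheal_excerpt.wav")   # hypothetical file
# spec[np.newaxis, ..., np.newaxis] -> CNN input of shape (1, n_mels, frames, 1)
```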