2020
DOI: 10.48550/arxiv.2011.00196
Preprint

RespireNet: A Deep Neural Network for Accurately Detecting Abnormal Lung Sounds in Limited Data Setting

Abstract: Auscultation of respiratory sounds is the primary tool for screening and diagnosing lung diseases. Automated analysis, coupled with digital stethoscopes, can play a crucial role in enabling tele-screening of fatal lung diseases. Deep neural networks (DNNs) have shown a lot of promise for such problems, and are an obvious choice. However, DNNs are extremely data hungry, and the largest respiratory dataset ICBHI [21] has only 6898 breathing cycles, which is still small for training a satisfactory DNN model. In t…

Cited by 3 publications (9 citation statements)

References 22 publications
“…The most common approach is placing the device on the chest area, like a typical auscultation device. This type of data acquisition can be conducted by digital stethoscopes, and the authors in [ 47 , 52 , 53 , 56 , 58 , 59 , 60 , 61 , 62 , 63 , 66 ] utilized data which were derived from this kind of data retrieval methodology. Most of the studies mention that they did not execute the data acquisition phase themselves, but they exploited open-source or proprietary datasets.…”
Section: Lung and Breath Sounds Analysis of Lower Respiratory Symptoms (mentioning)
confidence: 99%
“…This results in an increase in the models’ generalization capability, while also expanding the field of application where the trained algorithm can be used. Data constructed by this methodology are employed in [ 44 , 51 , 54 , 55 , 57 , 61 , 62 , 67 , 68 ]. On the other hand, only a limited number of studies have employed smartphone recordings as a means of data collection, with one notable example being [ 63 ], which utilized various smartphones to record lung sounds.…”
Section: Lung and Breath Sounds Analysis of Lower Respiratory Symptoms (mentioning)
confidence: 99%
“…This binary classification task is to detect whether a breathing sound segment contains abnormalities, including crackle and wheeze. We use the backbone of a deep convolutional model (ResNet) proposed in (Gairola et al 2020). Dropout (p = 0.5) and batch normalisation are leveraged to reduce over-fitting.…”
Section: Appendix Task Implementation (mentioning)
confidence: 99%
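
The setup quoted above (a ResNet backbone with dropout p = 0.5 and batch normalisation for a binary normal/abnormal decision) could look roughly like the sketch below. This is a minimal illustration assuming a torchvision ResNet34 backbone on spectrogram inputs; the class name, layer sizes and input shape are illustrative assumptions, not the citing authors' exact code.

import torch
import torch.nn as nn
from torchvision.models import resnet34

class AbnormalSoundDetector(nn.Module):
    # Binary classifier: does a breathing-sound segment contain
    # abnormalities (crackles or wheezes) or not?
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = resnet34(weights=None)
        # Keep all layers up to, but excluding, the final fully connected layer.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.BatchNorm1d(512),  # batch normalisation, as in the quoted statement
            nn.Dropout(p=0.5),    # dropout with p = 0.5, as in the quoted statement
            nn.Linear(512, num_classes),
        )

    def forward(self, x):
        # x: (batch, 3, height, width) spectrogram "images"
        return self.head(self.features(x))

# Example usage: logits = AbnormalSoundDetector()(torch.randn(4, 3, 224, 224))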
“…We observe a different frequency response across devices which results in a performance degradation for underrepresented devices. Hence, we calibrate the features of the audio segments by applying spectrum correction instead of training or fine-tuning the model for a specific device [31], [32]. The spectrum correction or calibration proposed in [33], which was first applied for acoustic scene classification, scales the frequency response of the recording devices.…”
Section: B Spectrum Correction (mentioning)
confidence: 99%
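
As a rough sketch of the spectrum-correction idea described above (scaling each device's frequency response toward a reference response), the code below assumes per-recording magnitude STFTs grouped by device and a chosen reference device; the function names and the exact averaging scheme are illustrative assumptions, not the method of [33] verbatim.

import numpy as np

def device_mean_spectrum(mag_stfts):
    # Average magnitude per frequency bin over all frames of all recordings
    # from one device; each element of mag_stfts has shape (freq_bins, frames).
    return np.concatenate(mag_stfts, axis=1).mean(axis=1)

def correction_coefficients(ref_spectrum, device_spectrum, eps=1e-8):
    # Per-frequency scaling that maps a device's average response to the
    # reference device's average response.
    return ref_spectrum / (device_spectrum + eps)

def apply_correction(mag_stft, coeffs):
    # Calibrate one recording's magnitude STFT (freq_bins, frames) so that
    # under-represented devices look more like the reference device.
    return mag_stft * coeffs[:, np.newaxis]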
“…al. [32] proposed a RespireNet model based on ResNet34 and fully connected layers with a set of techniques, i.e., device-specific fine-tuning, concatenation-based augmentation, blank region clipping and smart padding, to improve the accuracy.…”
Section: A Lung Sound Classification on ICBHI Dataset (mentioning)
confidence: 99%
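
Two of the techniques listed in this statement, concatenation-based augmentation and smart padding, can be illustrated roughly as follows. This is a simplified sketch of the idea as described, not the RespireNet authors' implementation; function names and the exact padding policy are assumptions.

import numpy as np

def concat_augment(cycle_a, cycle_b, label_a, label_b):
    # Concatenation-based augmentation: join two breathing cycles that share
    # the same label to create an additional training sample.
    assert label_a == label_b, "only concatenate cycles of the same class"
    return np.concatenate([cycle_a, cycle_b]), label_a

def smart_pad(cycle, target_len):
    # Smart padding: extend a short cycle by repeating its own samples
    # (rather than zero-padding) until the target length is reached.
    if len(cycle) >= target_len:
        return cycle[:target_len]
    repeats = int(np.ceil(target_len / len(cycle)))
    return np.tile(cycle, repeats)[:target_len]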