The current clinical diagnosis of COVID-19 requires person-to-person contact, takes a variable amount of time to produce results, and is expensive. In some developing countries, it is even inaccessible to the general population because of insufficient healthcare facilities. Hence, a low-cost, quick, and easily accessible solution for COVID-19 diagnosis is vital. This paper presents a study that develops an algorithm for the automated, noninvasive diagnosis of COVID-19 using cough sound samples and a deep neural network. Cough sounds provide essential information about the behavior of the glottis under different respiratory pathological conditions; hence, their characteristics can identify respiratory diseases such as COVID-19. The proposed algorithm consists of three main steps: (a) extraction of acoustic features from the cough sound samples, (b) formation of a feature vector, and (c) classification of the cough sound samples using a deep neural network. The output of the proposed system is a COVID-19 likelihood diagnosis. In this work, we consider three acoustic feature vectors: (a) time-domain, (b) frequency-domain, and (c) mixed-domain (a combination of time-domain and frequency-domain features). The performance of the proposed algorithm is evaluated using cough sound samples collected from healthy subjects and COVID-19 patients. The results show that the proposed algorithm automatically detects COVID-19 cough sound samples with an overall accuracy of 89.2%, 97.5%, and 93.8% using the time-domain, frequency-domain, and mixed-domain feature vectors, respectively. Given its high accuracy, the proposed algorithm could be used for quick identification or early screening of COVID-19. We also compare our results with those of some state-of-the-art works.
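The time-domain branch of step (a) can be sketched as follows. This is a minimal illustration, not the paper's exact feature set: the frame length, hop size, and the choice of short-time energy and zero-crossing rate as features are assumptions made for the example.

```python
import numpy as np

def time_domain_features(x, frame_len=1024, hop=512):
    """Minimal time-domain feature extraction (illustrative):
    compute per-frame short-time energy and zero-crossing rate,
    then summarize each by its mean and standard deviation to
    form a fixed-length feature vector (step (b))."""
    frames = [x[i:i + frame_len]
              for i in range(0, len(x) - frame_len + 1, hop)]
    energy = np.array([np.mean(f ** 2) for f in frames])
    # Each sign change contributes 2 to |diff(sign)|, hence the /2.
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2
                    for f in frames])
    return np.array([energy.mean(), energy.std(),
                     zcr.mean(), zcr.std()])

# Example: 1 s of a synthetic 100 Hz tone sampled at 8 kHz,
# standing in for a recorded cough sample.
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 100 * t)
fv = time_domain_features(x)
print(fv.shape)  # (4,)
```

The resulting vector would then be fed to the deep neural network in step (c); frequency-domain features would be computed analogously from framewise spectra.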
Voice disability is a barrier to effective communication. Around 1.2% of the world's population has some form of voice disability. Surgical procedures, namely laryngoscopy, laryngeal electromyography, and stroboscopy, are used for voice disability diagnosis. Researchers and practitioners have been working to find alternatives to these surgical procedures, and voice-sample-based diagnosis is one of them. The major steps in such work are (a) extracting voice features from voice samples and (b) discriminating pathological voices from normal voices using a classifier algorithm. However, there is no consensus on which voice feature and classifier algorithm provide the best accuracy in screening for voice disability. Moreover, some works use multiple voice features and multiple classifiers to ensure high reliability. In this paper, we address these issues. The motivation for this work is the need for non-invasive signal processing techniques to detect voice disability in the general population. This paper surveys voice disability detection methods and contains two main parts. In the first part, we present background information, including the causes of voice disability, current procedures and practices, voice features, and classifiers. In the second part, we present a comprehensive survey of voice disability detection algorithms. The issues and challenges related to selecting voice features and classifier algorithms are addressed at the end of the paper.
INDEX TERMS Algorithms, issues and challenges, signal processing, surgical methods, survey, voice disability, voice features.
The roles of 11-β hydroxysteroid dehydrogenase (11-βHSD1) and tumor necrosis factor-α (TNF-α) in obesity, regional adiposity, and insulin resistance have been sparsely evaluated. We determined the polymorphic status of 11-βHSD1 4478T>G and TNF-α-308G>A in Asian Indians in north India. In this cross-sectional study (n = 498; 258 males, 240 females), associations of 11-βHSD1 and TNF-α genotypes (PCR–RFLP) were analyzed with obesity [BMI ≥ 25 kg/m²; percentage body fat (%BF by DEXA); subcutaneous and intra-abdominal fat area (at the L2–L3 level by single-slice MRI) in a subsample] and insulin resistance. Of the subjects, 46% had generalized obesity, 55% had abdominal obesity, and 23.8% were insulin resistant. Frequencies (%) of the [T/T] and [T/G] genotypes of 11-βHSD1 were 89.57 and 10.43, respectively. Homozygosity for 11-βHSD1 4478G/G was absent, and this polymorphism showed no association with parameters of obesity or insulin resistance. Frequencies (%) of the TNF-α [G] and [A] alleles were 88 and 12, respectively. A higher frequency of the variant -308[A/A] was observed in females than in males (p = 0.01). Females with at least one A allele of TNF-α-308G>A had significantly higher %BF and total skinfold thickness, whereas higher values of waist–hip ratio, total cholesterol, triglycerides, and VLDL were observed in males. Subjects with even a single A allele in the TNF-α genotype showed higher subscapular skinfold thickness, predisposing them to truncal subcutaneous adiposity (p = 0.02). Our finding of an association of the TNF-α-308G>A variant with obesity indices in females suggests a gender-specific role of this polymorphism in obesity. High truncal subcutaneous adiposity is associated with the A allele of TNF-α-308G>A in this population.
This paper presents a pathological voice identification system that employs signal processing techniques based on cochlear implant models. The fundamentals of the biological process of speech perception are investigated to develop this technique. Two cochlear implant models are considered in this work: one uses a conventional bank of bandpass filters, and the other uses a bank of optimized gammatone filters. The center frequencies of these filters are selected to mimic the vibration patterns that audio signals produce in the human cochlea. The proposed system processes the speech samples and applies a convolutional neural network (CNN) for the final pathological voice identification. The results show that the two proposed models, adopting bandpass and gammatone filterbanks, can discriminate pathological voices from healthy ones, achieving F1 scores of 77.6% and 78.7%, respectively, on speech samples. The obtained results are also compared with those of other related published works.
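A filterbank front end of the kind described can be sketched as follows. This is a minimal illustration under stated assumptions: the number of bands, the log-spaced center frequencies, and the use of Butterworth bandpass filters are stand-ins for the paper's cochlear-inspired and optimized gammatone designs, not a reproduction of them.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bandpass_filterbank_energies(x, sr, n_bands=8,
                                 fmin=100.0, fmax=3800.0):
    """Split a signal into log-spaced frequency bands (an
    illustrative stand-in for cochlear-inspired center
    frequencies) and return the per-band log energy, which
    could serve as input features for a classifier."""
    edges = np.logspace(np.log10(fmin), np.log10(fmax), n_bands + 1)
    energies = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        y = sosfilt(sos, x)
        energies.append(np.log(np.mean(y ** 2) + 1e-12))
    return np.array(energies)

# Example: 1 s of a 440 Hz tone at 8 kHz; the band whose
# passband contains 440 Hz should carry the most energy.
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
e = bandpass_filterbank_energies(x, sr)
print(int(np.argmax(e)))
```

In the paper's system, per-band outputs like these (from bandpass or gammatone filters) are passed to the CNN for the final pathological-versus-healthy decision.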