2020
DOI: 10.1007/s10916-020-01627-1

AutoAudio: Deep Learning for Automatic Audiogram Interpretation

Cited by 13 publications (12 citation statements); References 25 publications
“…Additional advantages include its flexibility for expansion to bone conduction, speech perception, and masking. Crowson et al (72) utilized deep learning in the form of “Auto Audio,” a proof-of-concept model to interpret diagnostic audiograms. Audiograms consisting of various hearing loss types (e.g., conductive, sensorineural, mixed) were used to train several neural networks.…”
Section: Tele-audiology Services (mentioning)
confidence: 99%
“…Lee et al [11] leverage K-means clustering to categorize audiogram shapes into six basic types and five subtypes. Crowson et al [12] employ ResNet-101 to differentiate audiograms of individuals with conductive, sensorineural, mixed, or no hearing loss. Nonetheless, all these methods only give a rough summary of certain properties of audiograms.…”
Section: Audiogram Classification (mentioning)
confidence: 99%
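
The citation above describes fine-tuning ResNet-101 to sort audiogram images into conductive, sensorineural, mixed, or normal-hearing classes. The following is a minimal sketch of that general approach, not the published AutoAudio code: the directory layout, hyperparameters, and class count are illustrative assumptions.

```python
# Minimal sketch (not the AutoAudio authors' code) of fine-tuning a pretrained
# ResNet-101 to classify audiogram images into four hypothetical classes:
# normal, conductive, sensorineural, and mixed hearing loss.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

# Standard ImageNet preprocessing; audiogram plots resized to 224x224.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: audiograms/train/<class_name>/*.png
train_set = datasets.ImageFolder("audiograms/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load an ImageNet-pretrained ResNet-101 and replace the final fully
# connected layer with a 4-way classification head.
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # illustrative epoch count
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice the trained classifier would be evaluated on a held-out set of audiograms and reported per class; the sketch only shows the transfer-learning setup the quotation refers to.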
“…While there have been attempts to directly classify the types of hearing loss (if any) from audiogram images [10,11,12], they are only able to provide class-based qualitative descriptions of audiograms. They cannot recover the full information, specifically the precise hearing level at different frequencies, which are crucial for tuning hearing aids.…”
Section: Introduction (mentioning)
confidence: 99%
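
To make the contrast in that statement concrete, the sketch below (an illustration, not code from any cited paper, with made-up threshold values) compares the single class label a classifier produces with the per-frequency hearing levels that hearing-aid fitting actually consumes.

```python
# Illustrative only: class-based description vs. full per-frequency audiogram.

# Output of a classification model: one qualitative label.
classification_output = "sensorineural"

# Full audiogram content: air-conduction thresholds in dB HL at the
# standard audiometric frequencies (values here are invented).
full_audiogram = {
    250: 20,   # Hz -> dB HL
    500: 25,
    1000: 30,
    2000: 45,
    4000: 60,
    8000: 70,
}

# Hearing-aid tuning needs the per-frequency thresholds; a simple
# rule-of-thumb prescription (the half-gain rule) sets gain to roughly
# half the threshold at each frequency, which a class label alone
# cannot provide.
prescribed_gain_db = {f: round(0.5 * hl) for f, hl in full_audiogram.items()}
print(prescribed_gain_db)
```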
“…[88] Automation can assist clinicians and patients to interpret the measurement by data-driven automated reporting of accuracy and reliability (including signalling for suspicious outcomes) such as QUALIND® [53], or by automated classification for diagnostic purposes (including type and degree of hearing loss). Examples of automated classification include AMCLASS [89], AutoAudio [90], and data-driven audiogram classification [91].…”
Section: Accuracy (mentioning)
confidence: 99%