Objectives: A confluence of recent developments in cloud computing, real-time web audio, and machine learning psychometric function estimation has made wide dissemination of sophisticated turn-key audiometric assessments possible. The authors have combined these capabilities into an online (i.e., web-based) pure-tone audiogram estimator intended to empower researchers and clinicians with advanced hearing tests without the need for custom programming. The objective of this study is to assess the accuracy and reliability of this new online machine learning audiogram method relative to a commonly used hearing threshold estimation technique, also implemented online for the first time in the same platform.

Design: The authors performed air-conduction pure-tone audiometry on 21 participants between the ages of 19 and 79 years (mean 41, standard deviation 21) exhibiting a wide range of hearing abilities. For each ear, two repetitions of online machine learning audiogram estimation and two repetitions of online modified Hughson-Westlake ascending-descending audiogram estimation were acquired by an audiologist using the online software tools. The estimated hearing thresholds of these two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz).

Results: The two threshold estimation methods delivered very similar threshold estimates at standard audiogram frequencies. Specifically, the mean absolute difference between threshold estimates was 3.24 ± 5.15 dB. The mean absolute differences between repeated measurements of the online machine learning procedure and between repeated measurements of the Hughson-Westlake procedure were 2.85 ± 6.57 dB and 1.88 ± 3.56 dB, respectively. The machine learning method generated estimates of both threshold and spread (i.e., the inverse of psychometric slope)
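The threshold-and-spread parameterization mentioned above can be illustrated with a simple sigmoid psychometric function. This is a minimal sketch; the logistic form, the parameter names, and the guess/lapse rates are illustrative assumptions, not the study's exact model:

```python
import math

def psychometric(intensity_db, threshold_db, spread_db,
                 guess_rate=0.0, lapse_rate=0.0):
    """Probability of detecting a tone presented at intensity_db.

    threshold_db: intensity at the midpoint of the curve (50% detection
                  when guess and lapse rates are zero).
    spread_db:    inverse of the psychometric slope; a larger spread
                  gives a shallower transition from inaudible to audible.
    """
    core = 1.0 / (1.0 + math.exp(-(intensity_db - threshold_db) / spread_db))
    return guess_rate + (1.0 - guess_rate - lapse_rate) * core

# At the threshold itself, detection probability is 0.5:
p_at_threshold = psychometric(30.0, threshold_db=30.0, spread_db=5.0)
```

Estimating both parameters, rather than the threshold alone, is what lets the machine learning method characterize how sharply audibility changes around the threshold.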
We present a case study of porting NekBone, a skeleton version of the Nek5000 code, to a parallel GPU-accelerated system. Nek5000 is a computational fluid dynamics code based on the spectral element method, used for the simulation of incompressible flow. The original NekBone Fortran source code was used as the base and enhanced with OpenACC directives. Profiling NekBone provided an assessment of the code's suitability for GPU systems and indicated possible kernel optimizations. Porting NekBone to GPU systems required little effort and a small number of additional lines of code (approximately one OpenACC directive per 1000 lines of code). The naïve OpenACC implementation yielded little performance improvement: on a single node, from 16 Gflops obtained with the version without OpenACC, we reached 20 Gflops with the naïve OpenACC implementation. An optimized NekBone version reached 43 Gflops on a single node. In addition, we ported and optimized NekBone for parallel GPU systems, reaching a parallel efficiency of 79.9% on 1024 GPUs of the Titan XK7 supercomputer at Oak Ridge National Laboratory.
Behavioral testing in perceptual or cognitive domains requires querying a subject multiple times in order to quantify his or her ability in the corresponding domain. These queries must be conducted sequentially, and any additional testing domains are also typically tested sequentially, such as with distinct tests comprising a test battery. As a result, existing behavioral tests are often lengthy and do not offer comprehensive evaluation. The use of active machine-learning kernel methods for behavioral assessment provides extremely flexible yet efficient estimation tools to more thoroughly investigate perceptual or cognitive processes without incurring the penalty of excessive testing time. Audiometry represents perhaps the simplest test case to demonstrate the utility of these techniques. In pure-tone audiometry, hearing is assessed in the two-dimensional input space of frequency and intensity, and the test is repeated for both ears. Although an individual's ears are not linked physiologically, they share many features in common that lead to correlations suitable for exploitation in testing. The bilateral audiogram estimates hearing thresholds in both ears simultaneously by conjoining their separate input domains into a single search space, which can be evaluated efficiently with modern machine-learning methods. The result is the introduction of the first conjoint psychometric function estimation procedure, which consistently delivers accurate results in significantly less time than sequential disjoint estimators.
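The active estimation idea described above can be sketched in a toy one-dimensional form: maintain a posterior over candidate thresholds, probe at an informative level, and update after each response. This is not the paper's conjoint or kernel-based method; it is a minimal grid-based Bayesian sketch with an assumed logistic response model and a simulated listener:

```python
import numpy as np

def simulate_listener(level_db, true_threshold_db=40.0, spread_db=4.0, rng=None):
    """Simulated Bernoulli 'heard / not heard' response from a logistic curve."""
    rng = rng if rng is not None else np.random.default_rng(0)
    p_heard = 1.0 / (1.0 + np.exp(-(level_db - true_threshold_db) / spread_db))
    return rng.random() < p_heard

def active_threshold_estimate(n_trials=30, spread_db=4.0, seed=0):
    """Estimate a hearing threshold by Bayesian updating on a grid,
    probing each trial at the posterior mean (a region near the
    threshold, where responses are most informative)."""
    rng = np.random.default_rng(seed)
    grid = np.arange(-10.0, 111.0, 1.0)   # candidate thresholds in dB HL
    log_post = np.zeros_like(grid)        # flat prior over the grid
    for _ in range(n_trials):
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        level = float(np.sum(post * grid))          # next probe level
        heard = simulate_listener(level, rng=rng)   # query the listener
        # Likelihood of the observed response for each candidate threshold:
        p = 1.0 / (1.0 + np.exp(-(level - grid) / spread_db))
        p = np.clip(p, 1e-12, 1.0 - 1e-12)
        log_post += np.log(p) if heard else np.log(1.0 - p)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return float(np.sum(post * grid))

estimate = active_threshold_estimate()
```

The bilateral audiogram generalizes this idea by treating both ears' frequency-intensity planes as a single conjoined input space, so that evidence gathered in one ear also sharpens the posterior for the other.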
Objectives: When one ear of an individual can hear significantly better than the other, evaluating the worse ear with loud probe tones may require delivering masking noise to the better ear in order to prevent the probe tones from inadvertently being heard by the better ear. Current masking protocols are confusing, laborious, and time consuming. Adding a standardized masking protocol to an active machine learning audiogram procedure could potentially alleviate all of these drawbacks by dynamically adapting the masking as needed for each individual. The goal of this study is to determine the accuracy and efficiency of automated machine learning masking for obtaining true hearing thresholds.

Design: Dynamically masked automated audiograms were collected for 29 participants between the ages of 21 and 83 (mean 43, SD 20) with a wide range of hearing abilities. Normal-hearing listeners were given unmasked and masked machine learning audiogram tests. Listeners with hearing loss were given a standard audiogram test by an audiologist, with masking stimuli added as clinically determined, followed by a masked machine learning audiogram test. The hearing thresholds estimated by each pair of techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz).

Results: Masked and unmasked machine learning audiogram threshold estimates matched each other well in normal-hearing listeners, with a mean absolute difference between threshold estimates of 3.4 dB. Masked machine learning audiogram thresholds also matched the thresholds determined by a conventional masking procedure well, with mean absolute differences between threshold estimates of 4.9 dB and 2.6 dB for listeners with low and high asymmetry between the ears, respectively. Notably, out of 6200 masked machine learning audiogram
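The cross-hearing problem that motivates masking can be captured by a standard clinical rule of thumb: a probe tone may be heard by the non-test ear when its level, reduced by the interaural attenuation of the transducer, reaches the non-test ear's bone-conduction threshold. The sketch below encodes that rule; the 40 dB default interaural attenuation (typical of supra-aural headphones) and the function name are illustrative assumptions, not the study's dynamic masking algorithm:

```python
def masking_needed(test_ear_level_db, nontest_bone_threshold_db,
                   interaural_attenuation_db=40.0):
    """Return True if a probe tone to the test ear could cross over
    and be detected by the non-test ear, so masking is required.

    Rule of thumb: crossover occurs when the presentation level minus
    the interaural attenuation meets or exceeds the non-test ear's
    bone-conduction threshold.
    """
    return (test_ear_level_db - interaural_attenuation_db
            >= nontest_bone_threshold_db)

# An 80 dB HL probe against a non-test bone threshold of 10 dB HL:
# 80 - 40 = 40 dB reaches the non-test ear above its threshold,
# so masking is required.
requires_masking = masking_needed(80.0, 10.0)
```

A dynamic procedure would re-evaluate a check like this on every presentation and adjust the masker level accordingly, which is what makes the automated approach attractive compared with manual plateau-seeking protocols.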