During the COVID-19 pandemic, numerous swab samples have been taken for SARS-CoV-2 reverse transcriptase-polymerase chain reaction (RT-PCR) testing. Nasopharyngeal sampling is considered safe, despite adjacent vital structures (eg, orbit, skull base, rich vasculature; Figure). However, single case reports [1][2][3][4] and clinical observations indicate the possibility of severe complications. This case series investigated the frequency and type of SARS-CoV-2 nasopharyngeal test complications.
Methods | All patients presenting to the dedicated otorhinolaryngology emergency department
In rodents, the Robo1 gene regulates midline crossing of major nerve tracts, a fundamental property of the mammalian CNS. However, the neurodevelopmental function of the human ROBO1 gene remains unknown, apart from a suggested role in dyslexia. We therefore studied axonal crossing with a functional approach, based on magnetoencephalography, in 10 dyslexic individuals who all share the same rare, weakly expressing haplotype of the ROBO1 gene. Auditory-cortex responses were recorded separately to left- and right-ear sounds that were amplitude modulated at different frequencies. We found impaired interaural interaction that depended on ROBO1 expression in a dose-dependent manner. Our results indicate that normal crossing of the auditory pathways requires an adequate ROBO1 expression level.
Objectives: Auditory steady-state responses that can be elicited by various periodic sounds inform about subcortical and early cortical auditory processing. Steady-state responses to amplitude-modulated pure tones have been used to scrutinize binaural interaction by frequency-tagging the two ears’ inputs at different frequencies. Unlike pure tones, speech and music are physically very complex, as they include many frequency components, pauses, and large temporal variations. To examine the utility of magnetoencephalographic (MEG) steady-state fields (SSFs) in the study of early cortical processing of complex natural sounds, the authors tested the extent to which amplitude-modulated speech and music can elicit reliable SSFs.
Design: MEG responses were recorded to 90-s-long binaural tones, speech, and music, amplitude-modulated at 41.1 Hz at four different depths (25, 50, 75, and 100%). The subjects were 11 healthy, normal-hearing adults. MEG signals were averaged in phase with the modulation frequency, and the sources of the resulting SSFs were modeled by current dipoles. After the MEG recording, intelligibility of the speech, musical quality of the music stimuli, naturalness of music and speech stimuli, and the perceived deterioration caused by the modulation were evaluated on visual analog scales.
Results: The perceived quality of the stimuli decreased as a function of increasing modulation depth, more strongly for music than speech; yet, all subjects considered the speech intelligible even at the 100% modulation. SSFs were the strongest to tones and the weakest to speech stimuli; the amplitudes increased with increasing modulation depth for all stimuli.
SSFs to tones were reliably detectable at all modulation depths (in all subjects in the right hemisphere, in 9 subjects in the left hemisphere) and to music stimuli at 50 to 100% depths, whereas speech usually elicited clear SSFs only at 100% depth. The hemispheric balance of SSFs was toward the right hemisphere for tones and speech, whereas SSFs to music showed no lateralization. In addition, the right lateralization of SSFs to the speech stimuli decreased with decreasing modulation depth.
Conclusions: The results showed that SSFs can be reliably measured to amplitude-modulated natural sounds, with slightly different hemispheric lateralization for different carrier sounds. With speech stimuli, modulation at 100% depth is required, whereas for music the 75% or even 50% modulation depths provide a reasonable compromise between the signal-to-noise ratio of SSFs and sound quality or perceptual requirements. SSF recordings thus seem feasible for assessing the early cortical processing of natural sounds.
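The core analysis step described above, averaging MEG signals in phase with the modulation frequency and reading out the steady-state amplitude, can be sketched as follows. This is a minimal illustration on simulated data, not the authors' pipeline; the sampling rate is chosen so that one 41.1 Hz modulation cycle spans an integer number of samples, and all amplitudes and noise levels are invented.

```python
import numpy as np

# Sampling rate chosen so one 41.1 Hz cycle is exactly 100 samples;
# an epoch of 10 cycles is then exactly 1000 samples.
FS = 4110.0
F_MOD = 41.1

def ssf_amplitude(signal, fs, f_mod, n_cycles=10):
    """Phase-locked average: cut the recording into epochs that span an
    integer number of modulation cycles, average them, and read the
    f_mod component off the averaged epoch's spectrum."""
    epoch_len = int(round(n_cycles * fs / f_mod))        # samples per epoch
    n_epochs = signal.size // epoch_len
    epochs = signal[:n_epochs * epoch_len].reshape(n_epochs, epoch_len)
    evoked = epochs.mean(axis=0)                         # steady-state waveform
    spectrum = np.fft.rfft(evoked)
    # f_mod falls exactly in FFT bin n_cycles of the averaged epoch.
    return 2.0 * np.abs(spectrum[n_cycles]) / epoch_len  # single-sided amplitude

# Simulated 90-s "channel": a 41.1 Hz steady-state response buried in noise.
rng = np.random.default_rng(0)
t = np.arange(int(FS * 90)) / FS
signal = 0.5 * np.sin(2 * np.pi * F_MOD * t) + rng.normal(0.0, 2.0, t.size)
amp = ssf_amplitude(signal, FS, F_MOD)   # recovers ~0.5 despite SNR << 1
```

Because noise averages down with the square root of the number of epochs while the phase-locked component does not, a 90-s recording yields a clean amplitude estimate even when the response is far below the noise floor in the raw trace.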
The coronavirus disease 2019 (COVID-19), first described in late 2019, is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). In COVID-19, mortality is mainly caused by acute respiratory failure, whereas morbidity has been described for all major organ systems.1 Verified secondary bacterial infections in COVID-19 are rare, although antimicrobials are commonly used empirically.2 A case report of a 60-year-old obese male patient with epiglottitis and subsequent positive SARS-CoV-2 RT-PCR was published by Fondaw et al.3 This patient had initially presented with dyspnea and stridor and had to undergo emergency cricothyroidotomy for acute epiglottitis. The initial SARS-CoV-2 RT-PCR was negative, but on day two the chest X-ray showed signs consistent with COVID-19 pneumonitis and a repeat test confirmed COVID-19. The patient's condition improved, and he could be weaned off the ventilator on day seven. Here, we present a second case of likely COVID-19-associated epiglottitis.
CASE REPORT | A 29-year-old man without pre-existing medical conditions tested COVID-19 positive after having headache, fatigue, and mild rhinitis. Within a week, his COVID-19 symptoms improved. After an asymptomatic period of 12 days, the patient developed throat pain. He was referred to the Otorhinolaryngology-Head and Neck Surgery (ORL-HNS) Emergency Department at Helsinki University Hospital due to respiratory distress and muffled voice three weeks after the first symptoms associated with COVID-19 infection. His general health status was good, and vital signs were stable. Nasofiberoscopy showed a hyperemic epiglottis that was swollen asymmetrically. Yellowish, pus-like fluid was present in
The auditory octave illusion arises when dichotically presented tones, one octave apart, alternate rapidly between the ears. Most subjects perceive an illusory sequence of monaural tones: A high tone in the right ear (RE) alternates with a low tone, incorrectly localized to the left ear (LE). Behavioral studies suggest that the perceived pitch follows the RE input, and the perceived location the higher-frequency sound. To explore the link between the perceived pitches and brain-level interactions of dichotic tones, magnetoencephalographic responses were recorded to 4 binaural combinations of 2-min-long continuous 400- and 800-Hz tones and to 4 monaural tones. Responses to LE and RE inputs were distinguished by frequency-tagging the ear-specific stimuli at different modulation frequencies. During dichotic presentation, ipsilateral LE tones elicited weaker and ipsilateral RE tones stronger responses than when both ears received the same tone. During the most paradoxical stimulus (high tone to LE and low tone to RE, perceived as a low tone in the LE during the illusion), the contralateral responses to LE tones were also diminished. The results demonstrate modified binaural interaction of dichotic tones one octave apart, suggesting that this interaction contributes to pitch perception during the octave illusion.
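The frequency-tagging idea used here, modulating each ear's stimulus at its own rate so that both ears' cortical responses can be separated from a single recorded signal, can be sketched with coherent demodulation on simulated data. The tag frequencies, amplitudes, and noise level below are invented for illustration and are not the study's actual parameters.

```python
import numpy as np

# Hypothetical ear-specific tag frequencies (the study's values may differ).
FS = 1000.0
F_LEFT, F_RIGHT = 39.0, 43.0

def tagged_amplitude(signal, fs, f_tag):
    """Amplitude of the response component phase-locked to one ear's tag
    frequency, via coherent demodulation over the whole recording."""
    t = np.arange(signal.size) / fs
    return 2.0 * np.abs(np.mean(signal * np.exp(-2j * np.pi * f_tag * t)))

# Simulated 2-min cortical signal: the left-ear input drives a weaker
# response than the right-ear input, plus background noise.
rng = np.random.default_rng(1)
t = np.arange(int(FS * 120)) / FS
meg = (0.3 * np.sin(2 * np.pi * F_LEFT * t)
       + 0.8 * np.sin(2 * np.pi * F_RIGHT * t)
       + rng.normal(0.0, 1.5, t.size))
left_amp = tagged_amplitude(meg, FS, F_LEFT)     # ~0.3
right_amp = tagged_amplitude(meg, FS, F_RIGHT)   # ~0.8
```

Over a recording that spans an integer number of cycles of both tags, the two demodulators are orthogonal, so each ear's response is recovered with essentially no cross-talk from the other ear.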
Bilateral cochlear implantation is increasing worldwide. In adults, bilateral cochlear implants (BICI) are often performed sequentially with a time delay between the first (CI1) and the second (CI2) implant. The benefits of BICI have been reported for well over a decade. This study aimed at investigating these benefits for a consecutive sample of adult patients. Improvements in speech-in-noise recognition after CI2 were followed up longitudinally for 12 months with the internationally comparable Finnish matrix sentence test. The test scores were statistically significantly better for BICI than for either CI alone in all assessments during the 12-month period. At the end of the follow-up period, the bilateral benefit for co-located speech and noise was 1.4 dB over CI1 and 1.7 dB over CI2, and when the noise was moved from the front to 90 degrees on the side, spatial release from masking amounted to an improvement of 2.5 dB in signal-to-noise ratio. To assess subjective improvements in hearing and in quality of life, two questionnaires were used. Both questionnaires revealed statistically significant improvements due to CI2 and BICI. The association between speech recognition in noise and background factors (duration of hearing loss/deafness, time between implants) or subjective improvements was markedly smaller than what has been previously reported on sequential BICI in adults. Despite the relatively heterogeneous sample, BICI improved hearing and quality of life.
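The benefit figures quoted above are differences between speech reception thresholds (SRTs, in dB SNR; lower is better), where a positive difference means the listener tolerated that much more noise. The arithmetic can be made explicit with illustrative SRT values; the numbers below are invented to reproduce the reported benefits and are not the study's raw data.

```python
# Illustrative speech reception thresholds (dB SNR; lower is better).
srt_ci1_front = -4.0    # first implant alone, speech and noise co-located
srt_ci2_front = -3.7    # second implant alone, speech and noise co-located
srt_bici_front = -5.4   # both implants, speech and noise co-located
srt_bici_side = -7.9    # both implants, noise moved to 90 degrees

# Bilateral benefit: how much lower (better) the BICI threshold is.
benefit_over_ci1 = srt_ci1_front - srt_bici_front   # 1.4 dB
benefit_over_ci2 = srt_ci2_front - srt_bici_front   # 1.7 dB

# Spatial release from masking: improvement when noise moves aside.
spatial_release = srt_bici_front - srt_bici_side    # 2.5 dB
```

Expressing the outcomes as threshold differences like this is what makes matrix-sentence-test results comparable across languages and clinics.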
To take a step towards real-life-like experimental setups, we simultaneously recorded magnetoencephalographic (MEG) signals and subjects' gaze direction during audiovisual speech perception. The stimuli were utterances of /apa/ dubbed onto two side-by-side female faces articulating /apa/ (congruent) and /aka/ (incongruent) in synchrony, repeated once every 3 s. Subjects (N = 10) were free to decide which face they viewed, and responses were averaged into two categories according to the gaze direction. The right-hemisphere 100-ms response to the onset of the second vowel (N100m’) was a fifth smaller to incongruent than to congruent stimuli. The results demonstrate the feasibility of realistic viewing conditions with gaze-based averaging of MEG signals.
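Gaze-based averaging amounts to sorting trials by where the subject was looking and computing a separate evoked average per category. A toy sketch on simulated data follows; the epoch length, response shape, noise level, and the 0.8 gain for incongruent trials are all invented to mimic the "a fifth smaller" effect, not taken from the recordings.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n_trials = 600, 200
t = np.arange(int(0.4 * fs)) / fs                        # 400-ms epoch
template = np.sin(2 * np.pi * 8 * t) * np.exp(-t / 0.1)  # N100m-like waveform

# Each trial is labeled by which face the subject fixated; incongruent
# trials get a ~20% smaller response (illustrative gain).
gaze = rng.choice(["congruent", "incongruent"], size=n_trials)
gain = {"congruent": 1.0, "incongruent": 0.8}
epochs = np.array([gain[g] * template + rng.normal(0.0, 0.3, t.size)
                   for g in gaze])

# Gaze-based averaging: one evoked response per gaze category.
evoked = {label: epochs[gaze == label].mean(axis=0)
          for label in ("congruent", "incongruent")}
peak = {label: np.abs(evoked[label]).max() for label in evoked}
```

Averaging within each gaze category keeps the trial count per average high enough for a clean evoked response while still respecting what the subject actually attended to.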