This study aims to evaluate medical student and intern awareness of ionising radiation exposure from common diagnostic imaging procedures and to suggest how education could be improved. Fourth- to sixth-year medical students enrolled at a Western Australian university and interns from three teaching hospitals in Perth were recruited. Participants were asked to complete a questionnaire consisting of 26 questions on their background, knowledge of ionising radiation doses and learning preferences for future teaching on this subject. A total of 331 completed questionnaires were received (response rate 95.9%). On the 17 questions assessing knowledge of ionising radiation, respondents obtained a mean score of 6.0 (95% CI 5.8-6.2). Up to 54.8% of respondents underestimated the radiation dose from commonly requested radiological procedures, and 11.3% and 25.5% incorrectly believed that ultrasound and MRI, respectively, emit ionising radiation. Of the four subgroups of respondents, the intern doctor subgroup performed significantly better (mean score 6.9, P < 0.0001, 95% CI 6.5-7.3) than each of the three medical student subgroups. When asked for the preferred method of future teaching on radiation awareness, a combination of lectures, tutorials and workshops was favoured. This study has clearly shown that awareness of ionising radiation from diagnostic imaging is lacking among senior medical students and interns. The results highlight the need for improved education to minimise unnecessary exposure of patients and the community to radiation. Further studies are required to determine the most effective form of education.
The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain-computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on temporal response functions (TRFs). In the current context, a TRF is a function that facilitates a mapping between features of sound streams and EEG responses. It has been shown that when the envelope of attended speech and EEG responses are used to derive TRF mapping functions, the TRF model predictions can be used to discriminate between attended and unattended talkers. However, the predictive performance of the TRF models is dependent on how the TRF model parameters are estimated. A number of TRF estimation methods have been published, along with a variety of datasets. It is currently unclear whether any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different TRF estimation methods to classify attended speakers from multi-channel EEG data. The performance of the TRF estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams.

Keywords: temporal response function, speech decoding, electroencephalography, selective auditory attention, attention decoding

Wong et al., Auditory Attention Decoding Method Comparison

…succession of repeated short stimuli. More recently, these methods have been extended to continuous stimuli such as speech by using linear stimulus-response models, broadly termed 'temporal response functions' (TRFs). The TRF characterizes how a unit impulse in an input feature corresponds to a change in the M/EEG data.
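The forward TRF just described can be sketched as ridge regression on a time-lagged copy of the stimulus envelope. This is a minimal illustration on synthetic data, not the estimation pipeline of Wong et al. or of any particular toolbox; the window length, regularization value, and toy signals below are arbitrary assumptions.

```python
import numpy as np

def estimate_trf(stimulus, eeg, n_lags, reg=1.0):
    """Estimate a forward TRF mapping a stimulus envelope to one EEG
    channel via regularized (ridge) least squares.

    stimulus : (n_samples,) envelope of the attended speech
    eeg      : (n_samples,) single-channel EEG response
    n_lags   : length of the TRF window, in samples
    reg      : ridge regularization strength (lambda)
    """
    n = len(stimulus)
    # Build a lagged design matrix: column k holds the stimulus
    # delayed by k samples (zero-padded at the start).
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[: n - lag]
    # Ridge solution: w = (X'X + reg*I)^-1 X'y
    return np.linalg.solve(X.T @ X + reg * np.eye(n_lags), X.T @ eeg)

# Toy example: the "EEG" is a known filtering of the envelope plus noise,
# so the estimated TRF should recover the filter coefficients.
rng = np.random.default_rng(0)
env = rng.standard_normal(5000)
true_trf = np.array([0.0, 0.5, 1.0, 0.3, -0.2])
eeg = np.convolve(env, true_trf)[: len(env)] + 0.1 * rng.standard_normal(5000)
w = estimate_trf(env, eeg, n_lags=5)
```

The same machinery runs in the backward direction by swapping the roles of stimulus and response (and using negative lags), which is how stimulus-reconstruction decoders are typically built.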
TRFs can be used to generate continuous predictions about M/EEG responses or stimulus features, as opposed to characterizing the response (ERP) to repetitions of the same stimuli. Importantly, it has been demonstrated that stimulus-response models can be extracted not only from EEG responses to artificial sound stimuli (16) but also from EEG responses to naturalistic speech (17). A number of studies have considered mappings between the slowly varying temporal envelope of a speech sound signal (<10 Hz) and the correspondingly filtered M/EEG response (16, 28, 11, 12). However, TRFs are not limited to the broadband envelope: they can also be obtained with the speech spectrogram (9, 10), phonemes (8), or semantic features (4). This has opened new avenues of research into cortical responses to speech, advancing the field beyond examining responses to repeated isolated segments of speech.

TRF decoding methods have proven particularly apt for studying how the cortical processing of speech features is modulated by selective auditory attention. A number of st...
Perceptual processes can be probed by fitting stimulus-response models that relate measured brain signals such as electroencephalography (EEG) to the stimuli that evoke them. These models have also found application in the control of devices such as hearing aids. The quality of the fit, as measured by correlation, classification, or information rate metrics, indicates the value of the model and the usefulness of the device. Models based on Canonical Correlation Analysis (CCA) achieve a quality of fit that surpasses that of commonly used linear forward and backward models. Here, we show that their performance can be further improved using several techniques that capture the time-varying and context-dependent relationships within the data, including adaptive beamforming, CCA weight optimization, and recurrent neural networks. We demonstrate these results using a match-vs-mismatch classification paradigm, in which the classifier must decide which of two stimulus samples produced a given EEG response and which is a randomly chosen stimulus sample. This task captures the essential features of the more complex auditory attention decoding (AAD) task explored in many other studies.

In experiments that record brain responses to stimulation, stimulus-response models are useful in providing insight into the components of the response. As these models can provide information about auditory attention, they have also been put forward for brain-computer interface (BCI) applications, such as the "cognitive" control of a hearing aid [Wronkiewicz et al., 2016]. Previous studies have used linear system identification techniques to either predict the response from the stimulus (forward model) or infer the stimulus from the response (backward model) [Lalor and Foxe, 2010, Ding and Simon, 2012a,b, 2013, 2014].
In addition to these, a third form of model projects both stimulus and response into a common subspace via weight matrices obtained using Canonical Correlation Analysis (CCA) [Hotelling, 1936, Dmochowski et al., 2017, de Cheveigné et al., 2018]. As they are applicable to responses to arbitrary stimuli, such models allow the researcher to move beyond the standard "evoked-response" paradigm that requires repeating the same short stimulus many times [Ross et al., 2010]. The quality of the model can be quantified by calculating the correlation coefficient between actual and predicted brain response (forward model), between actual and inferred stimulus (backward model), or between canonical correlate (CC) pairs (CCA). Higher correlation values indicate that the model better captures the relation between stimulus and response. Alternatively, the quality of a model can be quantified on the basis of its performance in a classification task, in terms of discriminability (d-prime) or percent correct classification. This is ...
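The CCA-based model described above can be sketched with plain linear algebra: whiten the stimulus features and the EEG separately, then take an SVD of the cross-covariance of the whitened data. This is a minimal sketch on synthetic data, assuming simple PCA whitening and no temporal lags or regularization, both of which practical pipelines typically add.

```python
import numpy as np

def cca(X, Y, n_comp):
    """Canonical Correlation Analysis via per-dataset whitening + SVD.

    X : (n_samples, dx) stimulus features
    Y : (n_samples, dy) EEG channels
    Returns projection matrices Wx, Wy and the canonical correlations.
    """
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    # Whiten each dataset with its own SVD (PCA whitening).
    Ux, Sx, Vxt = np.linalg.svd(X, full_matrices=False)
    Uy, Sy, Vyt = np.linalg.svd(Y, full_matrices=False)
    # The SVD of the whitened cross-covariance yields the canonical
    # correlations (singular values) and the rotations to apply.
    U, corrs, Vt = np.linalg.svd(Ux.T @ Uy)
    Wx = (Vxt.T / Sx) @ U[:, :n_comp]
    Wy = (Vyt.T / Sy) @ Vt.T[:, :n_comp]
    return Wx, Wy, corrs[:n_comp]

# Toy example: both datasets share one latent signal z, so the first
# canonical correlate pair should recover it with high correlation.
rng = np.random.default_rng(1)
z = rng.standard_normal(2000)
X = np.c_[z + 0.1 * rng.standard_normal(2000), rng.standard_normal(2000)]
Y = np.c_[z + 0.1 * rng.standard_normal(2000), rng.standard_normal(2000)]
Wx, Wy, corrs = cca(X, Y, n_comp=1)
```

In a match-vs-mismatch setting, the correlations between projected stimulus and EEG segments (summed over CC pairs) would serve as the classification score: the matched segment should yield the larger value.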
Brain signals recorded with electroencephalography (EEG), magnetoencephalography (MEG) and related techniques often have a poor signal-to-noise ratio due to the presence of multiple competing sources and artifacts. A common remedy is to average over repeats of the same stimulus, but this is not applicable to temporally extended stimuli that are presented only once (speech, music, movies, natural sound). An alternative is to average responses over multiple subjects who were presented with identical stimuli, but differences in the geometry of brain sources and sensors reduce the effectiveness of this solution. Multiway canonical correlation analysis (MCCA) addresses this problem by allowing data from multiple subjects to be fused in such a way as to extract components common to all. This paper reviews the method, offers application examples that illustrate its effectiveness, and outlines the caveats and risks entailed by the method.
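The core of the MCCA idea can be sketched in a few lines: whiten each subject's data individually, concatenate along the channel axis, and take a PCA of the result; the leading components are those shared across subjects. This is a simplified illustration on synthetic data, assuming plain PCA whitening with no dimensionality truncation or regularization, which real multi-subject analyses generally require.

```python
import numpy as np

def mcca(datasets, n_comp):
    """Simplified multiway CCA.

    datasets : list of (n_samples, n_channels) arrays, one per subject
               (all time-aligned to the same stimulus)
    Returns the n_comp leading shared components, (n_samples, n_comp).
    """
    whitened = []
    for X in datasets:
        X = X - X.mean(0)
        # PCA-whiten: U holds unit-variance, decorrelated time courses.
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        whitened.append(U)
    Z = np.hstack(whitened)  # (n_samples, total channels over subjects)
    # PCA of the concatenation: directions present in *all* subjects
    # add coherently and dominate the leading components.
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :n_comp] * S[:n_comp]

# Toy example: three "subjects" observe the same shared signal through
# different random mixing weights, plus individual sensor noise.
rng = np.random.default_rng(2)
shared = rng.standard_normal(3000)
subjects = [
    shared[:, None] * rng.standard_normal(4)[None, :]
    + 0.1 * rng.standard_normal((3000, 4))
    for _ in range(3)
]
comp = mcca(subjects, n_comp=1)
```

The first recovered component should track the shared signal (up to sign), even though no single subject's sensor montage is the same.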