The aim of the current study was to investigate subtle characteristics of social perception and interpretation in high-functioning individuals with autism spectrum disorders (ASDs), and to study the relation between watching and interpreting. As a novel approach, we combined moment-by-moment eye tracking with verbal assessment. Sixteen young adults with ASD and 16 neurotypical control participants watched a video depicting a complex communication situation while their eye movements were tracked. The participants also completed a verbal task with questions related to the pragmatic content of the video. We compared verbal task scores and eye movements between groups, and assessed correlations between task performance and eye movements. Individuals with ASD had more difficulty than the controls in interpreting the video, and during two short moments there were significant group differences in eye movements. Additionally, we found significant correlations between verbal task scores and moment-level eye movements in the ASD group, but not among the controls. We concluded that participants with ASD had slight difficulties in understanding the pragmatic content of the video stimulus and attending to social cues, and that the connection between pragmatic understanding and eye movements was more pronounced for participants with ASD than for neurotypical participants.
Increasing concentrations of anesthetics in the blood induce a continuum of neurophysiological changes, which are reflected in the electroencephalogram (EEG). EEG-based depth-of-anesthesia assessment requires that the signal samples be correctly associated with the neurophysiological changes occurring at different anesthetic levels. A novel method is presented to estimate the phase of this continuum using feature data extracted from the EEG. The feature data calculated from EEG sequences corresponding to continuously deepening anesthesia are considered to form a one-dimensional nonlinear manifold in the multidimensional feature space. Utilizing a recently proposed algorithm, Isomap, the dimensionality of the feature data is reduced to achieve a one-dimensional embedding representing this manifold, and thereby the continuum of neurophysiological changes during induction of anesthesia. The Isomap-based estimation is validated with data recorded from nine patients during induction of propofol anesthesia. The proposed method provides a novel approach to assessing neurophysiological changes during anesthesia and offers potential for the development of more advanced depth-of-anesthesia monitoring systems.
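The manifold-learning step described above can be sketched with scikit-learn's Isomap. The feature matrix below is a synthetic stand-in (a smooth curve in a three-dimensional feature space, parameterized by a latent "depth" variable); the paper's actual EEG spectral features are not specified here.

```python
import numpy as np
from sklearn.manifold import Isomap

# Synthetic stand-in for EEG feature data: 200 epochs, each described by
# three features that vary smoothly along a latent 1-D "depth" continuum.
rng = np.random.default_rng(0)
depth = np.linspace(0.0, 1.0, 200)                  # latent anesthetic level
features = np.column_stack([
    np.sin(np.pi * depth),                          # nonlinear feature 1
    np.cos(np.pi * depth),                          # nonlinear feature 2
    depth ** 2,                                     # nonlinear feature 3
]) + rng.normal(scale=0.01, size=(200, 3))

# Reduce the nonlinear feature manifold to a one-dimensional embedding.
embedding = Isomap(n_neighbors=10, n_components=1).fit_transform(features)
```

If the manifold assumption holds, the recovered one-dimensional coordinate is monotonically related to the latent depth variable, which is what makes it usable as a phase estimate.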
In this paper, experiments on the automatic discrimination of basic emotions from spoken Finnish are described. For the purposes of the study, a large emotional speech corpus of Finnish was collected: 14 professional actors served as speakers and simulated four primary emotions while reading out a semantically neutral text. More than 40 prosodic features were derived and automatically computed from the speech samples. Two application scenarios were tested: the first was speaker-independent within a small, closed set of speakers, while the second was completely speaker-independent. Human listening experiments were conducted to assess the perceptual adequacy of the emotional speech samples. Statistical classification experiments indicated that, with the optimal combination of prosodic feature vectors, automatic emotion discrimination performance close to human emotion recognition ability was achievable.
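A minimal classification sketch in the spirit of the experiments above: the prosodic feature vectors are synthetic stand-ins (the corpus and the actual 40+ features are not reproduced here), and the standardized k-NN pipeline is an illustrative baseline, not the paper's classifier.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for prosodic feature vectors (e.g., F0 and energy
# statistics) for four emotion classes, 50 samples per class.
rng = np.random.default_rng(1)
n_per_class, n_features = 50, 10
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(4)])
y = np.repeat(np.arange(4), n_per_class)

# Standardize the features, then classify with k-nearest neighbors.
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(clf, X, y, cv=5)
```

Cross-validated accuracy is the usual yardstick here; in the study it is what gets compared against human recognition rates from the listening experiments.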
Asperger's syndrome (AS) belongs to the group of autism spectrum disorders and is characterized by deficits in social interaction, as manifested e.g. by the lack of social or emotional reciprocity. The disturbance causes clinically significant impairment in social interaction. Abnormal prosody has been frequently identified as a core feature of AS. There are virtually no studies on recognition of basic emotions from speech. This study focuses on how adolescents with AS (n=12) and their typically developed controls (n=15) recognize the basic emotions happy, sad, angry, and 'neutral' from speech prosody. Adolescents with AS recognized basic emotions from speech prosody as well as their typically developed controls did. Possibly the recognition of basic emotions develops during the childhood.
Fundamental frequency (F₀) and intensity are known to be important variables in the communication of emotions in speech. In singing, however, pitch is predetermined, and yet the voice should convey emotions. Hence, other vocal parameters are needed to express emotions. This study investigated the role of voice source characteristics and formant frequencies in the communication of emotions in monopitched vowel samples [a:], [i:], and [u:]. Student actors (5 males, 8 females) produced the emotional samples, simulating joy, tenderness, sadness, anger, and a neutral emotional state. Equivalent sound level (Leq), alpha ratio [SPL (1–5 kHz) − SPL (50 Hz–1 kHz)], and formant frequencies F1–F4 were measured. The [a:] samples were inverse filtered, and the estimated glottal flows were parameterized with the normalized amplitude quotient [NAQ = f_AC/(d_peak · T)]. Interrelations of the acoustic variables were studied by ANCOVA, considering the valence and psychophysiological activity of the expressions. Forty participants listened to the randomized samples (n = 210) to identify the emotions. The capacity of the monopitched vowels to convey emotions differed. Leq and NAQ differentiated activity levels, and NAQ also varied independently of Leq. In [a:], the filter (formant frequencies F1–F4) was related to valence. The interplay between the voice source and F1–F4 warrants a synthesis study.
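The NAQ parameterization (f_AC is the peak-to-peak flow amplitude, d_peak the magnitude of the negative peak of the flow derivative, T the fundamental period) can be illustrated with a short sketch. The half-sine glottal pulse below is a hypothetical stand-in for an inverse-filtered flow estimate.

```python
import numpy as np

def naq(glottal_flow, fs, f0):
    """Normalized amplitude quotient NAQ = f_AC / (d_peak * T) for one period."""
    T = 1.0 / f0                                    # fundamental period (s)
    f_ac = glottal_flow.max() - glottal_flow.min()  # AC flow amplitude
    dflow = np.gradient(glottal_flow, 1.0 / fs)     # flow derivative
    d_peak = -dflow.min()                           # negative-peak magnitude
    return f_ac / (d_peak * T)

# Synthetic one-period glottal pulse: half-sine open phase, closed phase zero.
fs, f0 = 16000, 200.0
n = int(fs / f0)
open_len = int(0.6 * n)                             # open quotient ~0.6
pulse = np.zeros(n)
pulse[:open_len] = np.sin(np.pi * np.arange(open_len) / open_len)
value = naq(pulse, fs, f0)
```

Because NAQ normalizes the amplitude ratio by the period, it is largely independent of overall flow level, which is why it can vary independently of Leq as reported above.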
General anesthesia is usually induced with a combination of drugs. In addition to the hypnotic agent, such as propofol, opioids are often used because of their synergistic hypnotic and analgesic properties. However, the effects of opioids on EEG changes and on the clinical state of the patient during anesthesia are complex and complicate the interpretation of EEG-based depth-of-anesthesia indices. In this paper, a novel method for separating the anesthetic effects of propofol and an ultrashort-acting opioid, remifentanil, using the spectral features of the EEG is proposed. By applying a floating search method, a well-performing feature set is obtained to estimate the effects of propofol during induction of anesthesia and to classify whether or not remifentanil has been coadministered. It is shown that incorporating the detected presence of opioids into the estimated effect of propofol significantly improves the determination of the clinical state of the patient, i.e., whether the patient will respond to a painful stimulus.
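The floating search idea can be sketched as a simplified sequential forward floating selection (SFFS): greedily add the most useful feature, then conditionally remove a previously added one if that improves the criterion. The data, classifier, and scoring function below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sffs(X, y, k):
    """Simplified sequential forward floating selection of k feature indices."""
    def score(idx):
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X[:, idx], y, cv=3).mean()

    selected = []
    while len(selected) < k:
        # Forward step: add the feature that yields the best combined score.
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        best = max(remaining, key=lambda j: score(selected + [j]))
        selected.append(best)
        # Floating (backward) step: drop an earlier feature if that improves
        # the score (the feature just added is never reconsidered here).
        while len(selected) > 2:
            current = score(selected)
            drops = [(score([j for j in selected if j != r]), r)
                     for r in selected if r != best]
            best_drop_score, r = max(drops)
            if best_drop_score > current:
                selected.remove(r)
            else:
                break
    return selected

# Synthetic data: only features 0 and 1 carry class information.
rng = np.random.default_rng(2)
X = rng.normal(size=(150, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
chosen = sffs(X, y, k=2)
```

The backward step is what distinguishes floating search from plain forward selection: a feature that looked useful early on can be discarded once better combinations emerge.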
In a recent study, we proposed a novel method to evaluate hypoxic ischemic encephalopathy (HIE) by assessing propofol-induced changes in the 19-channel electroencephalogram (EEG). The study suggested that patients with HIE are unable to generate EEG slow waves during propofol anesthesia 48 h after cardiac arrest (CA). Since a low number of electrodes would make the method clinically more practical, we investigated whether our results obtained with a full EEG cap could be reproduced using only forehead electrodes. Experimental data from comatose post-CA patients (N = 10) were used. EEG was recorded approximately 48 h after CA with a 19-channel EEG cap during a controlled propofol exposure. The slow wave activity was calculated separately for all electrodes and for four forehead electrodes (Fp1, Fp2, F7, and F8) by determining the low-frequency (< 1 Hz) power of the EEG. HIE was determined by following the patients' recovery for six months. In patients without HIE (N = 6), propofol substantially increased (244 ± 91%, mean ± SD) the slow wave activity in the forehead electrodes, whereas the patients with HIE (N = 4) were unable to produce such activity. The results obtained with the forehead electrodes were similar to those of the full EEG cap. In this experimental pilot study, the forehead electrodes were as capable as the full EEG cap of capturing the effect of HIE on propofol-induced slow wave activity. The finding offers potential for developing a clinically practical method for the early detection of HIE.
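The low-frequency (< 1 Hz) power computation can be sketched with SciPy's Welch estimator; the single-channel signal below is synthetic (a 0.5 Hz slow wave plus background noise), not patient data, and the window length is an assumption chosen to resolve sub-1 Hz frequencies.

```python
import numpy as np
from scipy.signal import welch

def slow_wave_power(eeg, fs, fmax=1.0):
    """Absolute EEG power below `fmax` Hz, estimated from the Welch spectrum."""
    # Long (8 s) segments are needed to resolve frequencies below 1 Hz.
    f, psd = welch(eeg, fs=fs, nperseg=int(8 * fs))
    band = f < fmax
    return psd[band].sum() * (f[1] - f[0])          # integrate the PSD band

# Synthetic forehead-channel signal: background noise, with and without
# a large-amplitude 0.5 Hz slow-wave component.
rng = np.random.default_rng(3)
fs, dur = 250, 60
t = np.arange(fs * dur) / fs
baseline = rng.normal(scale=1.0, size=t.size)
with_slow_waves = baseline + 20 * np.sin(2 * np.pi * 0.5 * t)
```

Comparing the band power of a channel before and during propofol exposure gives the kind of relative increase (e.g., the 244% reported above) used to separate the patient groups.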