Coping with stress has been shown to prevent many complications in medical conditions. In this paper we present an alternative method for analyzing and understanding stress, using the four basic emotions of happiness, calmness, sadness and fear as our basis functions. Electroencephalogram (EEG) signals were captured from the scalp and measured in response to stimuli for the four basic emotions, drawn from the IAPS emotion stimuli to induce stress. Features were extracted from the EEG signals using Kernel Density Estimation (KDE) and classified with a Multilayer Perceptron (MLP), a neural network classifier, to measure the accuracy of recognizing the subject's emotion leading to stress. The results show the potential of using the basic-emotion basis functions to visualize stress perception as an alternative tool for engineers and psychologists.
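As a rough illustration of the KDE step described above, the sketch below evaluates a one-dimensional Gaussian kernel density estimate over a grid; this is a minimal sketch under generic assumptions (the bandwidth, grid and function name are illustrative, not taken from the paper):

```python
import math

def gaussian_kde(samples, grid, bandwidth):
    """Evaluate a Gaussian kernel density estimate at each grid point."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    density = []
    for x in grid:
        # sum one Gaussian bump per sample, centered at that sample
        s = sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in samples)
        density.append(norm * s)
    return density
```

In a pipeline like the one described, the density values evaluated on a fixed grid would form the feature vector passed to the MLP classifier.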
In this paper, detailed studies were conducted to model individual driving behavior in order to identify features that can be used efficiently and effectively to profile each driver. Brake and gas pedal pressures are used to identify the uniqueness of each driver's maneuvers. These differences in driving habits may be due to the way the subconscious mind works and responds; in addition, switching between the subconscious and conscious mind also produces a unique response in how the brain performs. Since the activation of movement is controlled by the cerebellum, we propose using the Cerebellar Model Articulation Controller (CMAC), introduced by Albus, to model each driver's behavior. In this paper we focus only on the driver's gas and brake pedal pressures to understand driver behavior under different environments. Experimental results from the CMAC profiles show the potential of extracting behavioral features of drivers for identification, verification, emotion recognition, stress detection and many other behavioral conditions.

Keywords: Cerebellar Model Articulation Controller (CMAC), driver profiling, brake pedal signal, gas pedal signal
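A CMAC maps an input to a set of overlapping tiles and sums the tile weights, training them with LMS updates. The toy one-dimensional version below is a minimal sketch, not the authors' implementation; the tiling count, tile width and learning rate are illustrative assumptions:

```python
class CMAC:
    """Toy one-dimensional CMAC: overlapping shifted tilings + LMS updates."""

    def __init__(self, n_tilings=4, tile_width=0.2, lr=0.2):
        self.n_tilings = n_tilings
        self.tile_width = tile_width
        self.lr = lr
        self.w = {}  # sparse weight table keyed by (tiling, tile index)

    def _active_tiles(self, x):
        # each tiling is offset by a fraction of the tile width,
        # giving resolution tile_width / n_tilings
        return [(t, int((x + t * self.tile_width / self.n_tilings)
                        // self.tile_width))
                for t in range(self.n_tilings)]

    def predict(self, x):
        # output is the sum of the weights of all active tiles
        return sum(self.w.get(k, 0.0) for k in self._active_tiles(x))

    def update(self, x, target):
        # LMS: spread the prediction error equally over the active tiles
        err = target - self.predict(x)
        for k in self._active_tiles(x):
            self.w[k] = self.w.get(k, 0.0) + self.lr * err / self.n_tilings
```

Trained on (pedal position, pedal pressure) pairs from one driver, the learned weight table itself could then serve as that driver's profile.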
This paper proposes an emotion recognition system using electroencephalographic (EEG) signals. Both time-domain and frequency-domain approaches to feature extraction were evaluated, using a neural network (NN) and a fuzzy neural network (FNN) as classifiers. Data were collected through psychological stimulation experiments. Three basic emotions, namely Angry, Happy and Sad, were selected for recognition, with Relax as an emotionless state. Both the time-domain (statistical) and frequency-domain (MFCC-based) approaches show potential for emotion recognition using EEG signals.
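The abstract does not specify which statistics make up the time-domain features; a hypothetical minimal feature vector for one EEG channel (mean, standard deviation, skewness, kurtosis) might look like this sketch:

```python
import math

def time_domain_features(signal):
    """Statistical features of one EEG channel: mean, std, skewness, kurtosis.
    (Assumed feature set for illustration; the paper does not list its statistics.)"""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    std = math.sqrt(var)
    # third and fourth standardized moments
    skew = sum((x - mean) ** 3 for x in signal) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in signal) / (n * var ** 2)
    return [mean, std, skew, kurt]
```

Concatenating such vectors across channels would yield the input to the NN or FNN classifier.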
<span lang="EN">The rise of Internet access, social media and the availability of smartphones intensifies the epidemic of pornography addiction, especially among younger teenagers. This scenario can have many side effects on the individual, such as altered behavior, changes in moral values and rejection of normal community conventions. Hence, it is imperative to detect pornography addiction as early as possible. In this paper, a method using brain signals from the frontal area captured with EEG is proposed to detect whether a participant may have a pornography addiction. It acts as a complement to common psychological questionnaires. Experimental results show that addicted participants had lower alpha-wave activity in the frontal brain region than non-addicted participants, as observed in power spectra computed with Low Resolution Electromagnetic Tomography (LORETA). The theta band also shows a disparity between addicted and non-addicted participants, but the distinction is not as obvious as in the alpha band. Consequently, more work needs to be conducted to further test the validity of the hypothesis. It is envisaged that, with more participants and further investigation, the proposed method will be the initial step toward a groundbreaking way of understanding how pornography addiction affects the brain.</span>
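The alpha/theta comparison rests on power within standard EEG frequency bands. The sketch below sums DFT bin power in a band via a plain discrete Fourier transform; it is a generic illustration, not the LORETA computation used in the paper, and the function name and band edges are assumptions:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum the power of DFT bins whose frequency lies in [f_lo, f_hi) Hz."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n  # frequency of bin k
        if f_lo <= f < f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                     for t in range(n))
            total += (re * re + im * im) / n
    return total
```

With conventional band edges (theta roughly 4-8 Hz, alpha roughly 8-13 Hz), comparing these two numbers per participant mirrors the alpha-versus-theta contrast discussed above.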
In this paper, speech emotion verification using two of the most popular methods in speech processing and analysis, the Mel-Frequency Cepstral Coefficient (MFCC) and the Gaussian Mixture Model (GMM), is proposed and analyzed. Features for speech emotion were extracted using the Short-Time Fourier Transform (STFT) for MFCC and the Short-Time Histogram (STH) for GMM. The performance of the speech emotion verification is measured on three neural network (NN) and fuzzy neural network (FNN) architectures, namely the Multilayer Perceptron (MLP), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Generic Self-Organizing Fuzzy Neural Network (GenSoFNN). Results obtained from experiments using real audio clips from movies and television sitcoms show the potential of the proposed feature extraction methods for real-time applications, owing to their reasonable accuracy and fast training time. This could lead to practical use if the emotion verifier is embedded in real-time applications, especially on personal digital assistants (PDAs) or smartphones.
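The core of MFCC extraction is a triangular mel filterbank applied to a frame's power spectrum, followed by a log and a DCT. The sketch below shows those steps under standard textbook assumptions (filter count, FFT size and sample rate are illustrative; the paper's exact parameters are not given):

```python
import math

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    """Triangular filters with centers evenly spaced on the mel scale."""
    lo, hi = hz_to_mel(0.0), hz_to_mel(fs / 2.0)
    mels = [lo + i * (hi - lo) / (n_filters + 1) for i in range(n_filters + 2)]
    bins = [int((n_fft + 1) * mel_to_hz(m) / fs) for m in mels]
    fbank = [[0.0] * (n_fft // 2 + 1) for _ in range(n_filters)]
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):          # rising edge of the triangle
            if c > l:
                fbank[i][k] = (k - l) / (c - l)
        for k in range(c, r):          # falling edge of the triangle
            if r > c:
                fbank[i][k] = (r - k) / (r - c)
    return fbank

def mfcc_frame(power_spec, fbank, n_ceps):
    """Log filterbank energies followed by a DCT-II, keeping n_ceps coefficients."""
    eps = 1e-10
    log_e = [math.log(sum(w * p for w, p in zip(f, power_spec)) + eps)
             for f in fbank]
    n = len(log_e)
    return [sum(log_e[j] * math.cos(math.pi * i * (j + 0.5) / n)
                for j in range(n))
            for i in range(n_ceps)]
```

Per frame of the STFT, the resulting coefficient vector would be the feature passed to the MLP, ANFIS or GenSoFNN verifier.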