2017 17th International Conference on Control, Automation and Systems (ICCAS)
DOI: 10.23919/iccas.2017.8204459
Preliminary test of affective virtual reality scenes with head mount display for emotion elicitation experiment

Abstract: Emotion elicitation experiments are conducted to collect biological signals from a subject who is in a particular emotional state. The recorded signals are used as a training/test dataset for constructing an emotion recognition system by means of machine learning. In conventional emotion elicitation experiments, affective images or videos were presented to a subject to draw out an emotion. However, the authors have concerns about their effectiveness. To surely evoke a specific emotion from subjects, we have produc…

Cited by 11 publications (3 citation statements) · References 10 publications
“…Of these, the best-performing classifier for intrasubject classification was RF (98.20%) by Kumaran et al [93] on music stimuli, while the best for intersubject classification was DGCNN (90.40%) by Song et al [110] using video stimulation from the SEED and DREAMER datasets. As for VR stimuli, only Hidaka et al [116] reported results, using SVM (81.33%) but with only five subjects to evaluate its performance, which is very low given that a minimum of about 30 subjects is expected for the results to be justifiable, as noted by Alarcao and Fonseca [22].…”
Section: Examining Previous Studies
confidence: 99%
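
The distinction drawn in the excerpt above between intrasubject and intersubject classification is purely a matter of how trials are split for evaluation. The sketch below (Python with scikit-learn, synthetic stand-in data; the subject count, trial count, features, and labels are all hypothetical, not taken from the cited studies) contrasts the two protocols using an SVM: per-subject cross-validation versus leave-one-subject-out.

```python
# Hypothetical sketch (synthetic data) contrasting the two evaluation
# protocols named above: intrasubject (train/test within one subject)
# versus intersubject (leave-one-subject-out). Real EEG feature
# extraction is out of scope here.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 30, 40, 16  # assumed sizes
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 2, size=len(X))  # binary emotion label (synthetic)
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Intersubject: each fold holds out every trial of one subject.
inter = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)
print(f"intersubject accuracy: {inter.mean():.3f}")

# Intrasubject: cross-validate within each subject, then average.
intra = [
    cross_val_score(clf, X[groups == s], y[groups == s], cv=5).mean()
    for s in range(n_subjects)
]
print(f"intrasubject accuracy: {np.mean(intra):.3f}")
```

With only five subjects, the leave-one-subject-out estimate rests on just five folds, which is one way to read the excerpt's concern about sample size.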
“…Focusing in particular on arousal and valence affective responses, previous studies have applied EEG classification in the context of music stimuli [66], music videos [6,16,83,96,100,101], and video clips [48,56,58,68,88,93]. Immersive virtual reality's (VR) ability to evoke emotion has made VR increasingly attractive as a tool for emotion detection in general [28,33,35,36,47,105]. In fact, the self-reported intensity of emotion was found to be significantly greater in immersive VR than for similar content in non-immersive virtual environments [10].…”
Section: Emotion Detection Based on BCI-VR
confidence: 99%
“…Classifier average performance: Neural Network [31][32][33], 85.80%; Support Vector Machine [34][35][36], 77.80%; K-Nearest Neighbor [33,37,38], 88.94%; Multi-layer Perceptron [38][39][40], 78.16%; Bayes [41][42][43], 69.62%; Extreme Learning Machine [41], 87.10%; K-Means [43], 78.06%; Linear Discriminant Analysis [42], 71.30%; Gaussian Process [44], 71.30%. To improve on the performance of the SOA methods, we used the features generated by M3GP in Figure 15. This kind of transfer learning was used in [45] with success, and the best training transformation found in M3GP was used to transform the dataset into a new one (M3GP tree in Section 4.2), considering that these new features contain more information to simplify the learning process of the SOA methods.…”
Section: Classifier Average Performance
confidence: 99%
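
The pipeline that last excerpt describes (transform the dataset with the best transformation found by genetic programming, then hand the new features to state-of-the-art classifiers) can be mocked up as follows. This is a minimal illustration, not the cited work: the real mapping is an evolved M3GP tree, for which the fixed polynomial expansion below is only a hypothetical stand-in, and the dataset is synthetic.

```python
# Minimal sketch of the pipeline the excerpt describes: map the dataset
# into a new feature space, then train standard classifiers on the new
# features. The real mapping is an M3GP tree evolved by genetic
# programming; PolynomialFeatures is only a hypothetical stand-in.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Stand-in for the "best training transformation" (an evolved M3GP tree).
transform = PolynomialFeatures(degree=2, include_bias=False)
X_new = transform.fit_transform(X)

for name, clf in [("SVM", SVC()), ("k-NN", KNeighborsClassifier())]:
    raw = cross_val_score(clf, X, y, cv=5).mean()
    new = cross_val_score(clf, X_new, y, cv=5).mean()
    print(f"{name}: raw features {raw:.3f} -> transformed {new:.3f}")
```

The same replace-the-features pattern applies to any of the classifiers listed in the excerpt; any gain depends on how much class-separating structure the learned transformation encodes.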