2017
DOI: 10.1109/msmc.2017.2664478

Cybernetics of the Mind: Learning Individual's Perceptions Autonomously

Abstract: In this paper, we describe an approach to computational modelling and autonomous learning of how individuals perceive sensory inputs. A hierarchical process of summarizing heterogeneous raw data is proposed. At the lower layer of the hierarchy, the raw data autonomously forms semantically meaningful concepts. Instead of clustering by visual or audio similarity, concepts at the second layer of the hierarchy are formed based on the observed physiological variables (PVs…
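As a rough illustration of the two-layer idea described in the abstract, and not the authors' actual method, the sketch below first clusters raw sensory feature vectors into lower-layer concepts and then re-groups those concepts by the physiological variables recorded while they were perceived. All function names, feature dimensions, and clustering choices here are hypothetical assumptions.

```python
# Illustrative sketch only; not the method from the paper.
# Layer 1 summarizes raw sensory features into candidate concepts;
# Layer 2 groups concepts by observed physiological variables (PVs)
# rather than by audio/visual similarity.
import numpy as np
from sklearn.cluster import KMeans

def lower_layer_concepts(raw_features, n_concepts=10):
    """Layer 1: cluster raw feature vectors into candidate concepts."""
    km = KMeans(n_clusters=n_concepts, n_init=10, random_state=0).fit(raw_features)
    return km.labels_

def second_layer_grouping(concept_labels, physio, n_groups=4):
    """Layer 2: group concepts by the mean PV response they evoke."""
    n_concepts = int(concept_labels.max()) + 1
    # Mean PV response per lower-layer concept (e.g., heart rate, skin conductance).
    pv_per_concept = np.vstack([physio[concept_labels == c].mean(axis=0)
                                for c in range(n_concepts)])
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(pv_per_concept)
    return km.labels_  # perception-based group for each lower-layer concept

# Toy data standing in for real audio/visual features and PV streams.
rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 32))   # e.g., audio/visual feature vectors
pvs = rng.normal(size=(500, 3))    # e.g., heart rate, skin conductance, temperature
concepts = lower_layer_concepts(raw)
groups = second_layer_grouping(concepts, pvs)
```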

Cited by 7 publications (4 citation statements)
References 25 publications
“…Since scientists do not agree on all the available models, research on ER in any applicative domain starts from the widely recognised model of Ekman, which we chose for our study. Recent research underlines that Ekman's primary emotional states, including happiness, sadness, anger, disgust and neutral [23] can be recognised based on text [24] and physiological clues such as heart rate, skin conductance, gestures, facial expression and sound, which can be managed with a multidimensional approach [11] and compared. Among all ER approaches, facial recognition is still predominant.…”
Section: Problem Description and Proposed Solution (mentioning, confidence: 99%)
“…Since scientists do not agree on all the available models, research on ER in any applicative domain starts from the widely recognized model of Ekman, which we chose for our study. Recent research underlines that Ekman's primary emotional states, including happiness, sadness, anger, disgust, or neutral state [24] can be recognized based on text [25], physiological clues such as heart rate, skin conductance, gestures, facial expression, and sound, that can be managed with a multidimensional approach [8].…”
Section: Affective Computing and Emotion Recognition (mentioning, confidence: 99%)
“…Recognizing moods and sentiments is a complex process. Recent research underlines that primary emotional states such as happiness, sadness, anger, disgust, or a neutral state [38] can be recognized from text [31] and physiological clues such as heart rate, skin conductance, and facial expression; sentiment, moods, and affect are more complex states and are better managed with a multidimensional approach [39] [15]. Since Rosalind Picard defined the challenges for Affective Computing in 2003 [4], numerous advances have been made in emotion recognition, such as characterizing the collective influence of emotions expressed online [9], showing that emotional expressiveness is the crucial fuel that sustains communities; studying cultural aspects of emotions in art [16] and their variations; and creating emotionally engaging experiences in games [33], where affective changes are crucial to the conscious experience of the world around us.…”
Section: Affective Computing and Emotion Recognition (mentioning, confidence: 99%)
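The citation statements above describe recognizing Ekman's primary emotional states from physiological clues such as heart rate and skin conductance. Purely as a hedged illustration of that kind of pipeline, and not any cited paper's actual system, the snippet below trains a generic classifier on synthetic stand-in features; every feature, label, and parameter is a placeholder assumption.

```python
# Minimal sketch: mapping physiological features to Ekman-style emotion labels.
# All data here is synthetic; no claim about how the cited works implement ER.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

EMOTIONS = ["happiness", "sadness", "anger", "disgust", "neutral"]

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                 # e.g., heart rate, HRV, skin conductance, temperature
y = rng.integers(0, len(EMOTIONS), size=300)  # stand-in emotion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("example prediction:", EMOTIONS[int(clf.predict(X_te[:1])[0])])
```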