2013
DOI: 10.1080/18756891.2013.804143

User-Personality Classification Based on the Non-Verbal Cues from Spoken Conversations

Abstract: Technology that detects user personality from speech signals is needed to enhance interaction between a user and a virtual agent through a speech interface. In this study, personality patterns were automatically classified as either extroverted or introverted. Patterns were recognized from non-verbal cues such as the rate, energy, pitch, and silent intervals of speech, together with their patterns of change. Through experimentation, a maximum pattern class…
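The pipeline the abstract describes, prosodic cues (speech rate, energy, pitch, silences) feeding a binary extroverted/introverted classifier, can be sketched as below. This is an illustration only, not the paper's method: the feature values are synthetic, and the SVM is a stand-in classifier.

```python
# Sketch: binary extroverted/introverted classification from non-verbal
# speech cues. All feature values are synthetic; the four columns follow
# the cue list in the abstract (rate, energy, pitch, silence ratio).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def synth_features(n, extroverted):
    # Columns: speech rate (syll/s), mean energy (dB), mean pitch (Hz),
    # silence ratio. Means are invented for illustration.
    base = (np.array([5.0, 62.0, 180.0, 0.15]) if extroverted
            else np.array([3.5, 55.0, 150.0, 0.30]))
    return base + rng.normal(0.0, [0.4, 2.0, 12.0, 0.04], size=(n, 4))

X = np.vstack([synth_features(40, True), synth_features(40, False)])
y = np.array([1] * 40 + [0] * 40)  # 1 = extroverted, 0 = introverted

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
train_acc = clf.score(X, y)
```

With clearly separated synthetic clusters the classifier fits the training data almost perfectly; real conversational data is far noisier, which is why the paper reports pattern-classification accuracy experimentally.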



Cited by 13 publications (8 citation statements)
References 18 publications
“…Voice pitch has even been found to influence perceptions of attractiveness, with research finding that men rate women with high-pitched voices as more attractive than those with low-pitched voices [24]. Research in the area of verbal communication has found that individuals can determine the personality traits of others with considerable accuracy, purely through patterns of speech, such as speed and voice pitch [25].…”
Section: Introduction (mentioning; confidence: 99%)
“…Third, machine learning advances provide opportunities for more cost-efficient behavioral assessments, because coding 42 behaviors across three (relatively short) exercises took about 1,000 hours of coding time. To this end, recent developments in the automatic extraction of facial characteristics (e.g., Baltrusaitis et al., 2018), body language (e.g., Biel et al., 2011; Nguyen et al., 2013), paralanguage (e.g., Biel et al., 2011; Kwon et al., 2013), or verbal content (e.g., Tausczik & Pennebaker, 2010) might be integrated into AC research.…”
Section: Directions For Future Research (mentioning; confidence: 99%)
“…Amongst the most popular vocal features, we find pitch, energy, speech rate, first and second formant, cepstral features, jitter, and shimmer (An et al., 2016; J. Biel et al., 2011; Kwon et al., 2013; Zhao et al., 2015). Some earlier works used a mixture of handcrafted, automatically extracted audio features and manually annotated visual features (Nguyen et al., 2013).…”
Section: Feature Extraction (mentioning; confidence: 99%)
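Two of the vocal features listed in this excerpt, energy and pitch, can be extracted from a raw waveform with plain NumPy: framewise RMS energy and a naive autocorrelation pitch estimate. Production systems use dedicated toolkits (e.g., openSMILE or Praat); this sketch only shows the underlying idea, on a synthetic tone rather than speech.

```python
# Framewise RMS energy and autocorrelation pitch, demonstrated on a
# synthetic 200 Hz tone standing in for voiced speech.
import numpy as np

SR = 16000  # sample rate in Hz

def frame(sig, size=1024, hop=512):
    # Split the signal into overlapping frames.
    n = 1 + (len(sig) - size) // hop
    return np.stack([sig[i * hop : i * hop + size] for i in range(n)])

def rms_energy(frames):
    # Root-mean-square energy per frame.
    return np.sqrt((frames ** 2).mean(axis=1))

def autocorr_pitch(frames, sr=SR, fmin=80, fmax=400):
    # Pick the autocorrelation peak within the plausible pitch range.
    lags = slice(sr // fmax, sr // fmin)
    pitches = []
    for f in frames:
        ac = np.correlate(f, f, mode="full")[len(f) - 1 :]
        lag = int(np.argmax(ac[lags])) + lags.start
        pitches.append(sr / lag)
    return np.array(pitches)

t = np.arange(SR) / SR
sig = np.sin(2 * np.pi * 200.0 * t)  # one second of a 200 Hz sine
frames = frame(sig)
energy = rms_energy(frames)          # RMS of a unit sine is about 0.707
pitch = autocorr_pitch(frames)       # should recover roughly 200 Hz
```

Jitter, shimmer, and formants need more machinery (cycle-level period tracking, LPC analysis), which is why the cited works lean on specialized feature extractors.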
“…Support Vector Machines, Logistic Regression, Bayesian Networks, etc.) to discriminate trait scores higher or lower than average (Audhkhasi et al., 2012; Batrinca et al., 2011; Kwon et al., 2013; Mohammadi et al., 2010; Pianesi et al., 2008). However, binary classification tasks have started to decline in popularity following criticisms that average scores (which are the most common) were forcibly classified as high or low (Mariooryad & Busso, 2017; Phan & Rauthmann, 2021).…”
Section: Prediction Algorithms (mentioning; confidence: 99%)
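The binary setup this excerpt describes, and the criticism of it, can be made concrete: continuous trait scores are split at the sample mean, and a standard classifier (logistic regression here) discriminates high from low. The scores below are synthetic; `near_avg` flags the near-average cases that such a split forces into one class or the other.

```python
# Mean-split binarization of continuous trait scores, the common setup
# the cited criticism targets. All data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
features = rng.normal(size=(n, 3))  # stand-ins for vocal features
true_w = np.array([0.8, -0.5, 0.3])
scores = features @ true_w + rng.normal(0.0, 0.5, n)  # continuous trait

labels = (scores > scores.mean()).astype(int)   # forced high/low split
near_avg = np.abs(scores - scores.mean()) < 0.25  # barely-above/below cases

clf = LogisticRegression().fit(features, labels)
acc = clf.score(features, labels)
```

The `near_avg` mask is nonempty for any realistic score distribution: scores sitting essentially at the mean still receive a hard high/low label, which is exactly why later work moved toward regression on the continuous scores instead.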