2018
DOI: 10.1109/taffc.2016.2614299
Automated Analysis and Prediction of Job Interview Performance

Abstract: We present a computational framework for automatically quantifying verbal and nonverbal behaviors in the context of job interviews. The proposed framework is trained by analyzing the videos of 138 interview sessions with 69 internship-seeking undergraduates at the Massachusetts Institute of Technology (MIT). Our automated analysis includes facial expressions (e.g., smiles, head gestures, facial tracking points), language (e.g., word counts, topic modeling), and prosodic information (e.g., pitch, intonation, an…
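To make the abstract's feature categories concrete, here is a minimal illustrative sketch (not the paper's implementation) of the kind of verbal features the framework quantifies, such as word counts and filler-word usage, computed from an interview transcript. The filler-word list and feature names are assumptions for illustration only.

```python
# Illustrative sketch only: simple word-level statistics of the kind an
# automated interview-analysis pipeline might extract from a transcript.
# The FILLERS set and feature names are hypothetical, not from the paper.

FILLERS = {"um", "uh", "like", "basically"}

def verbal_features(transcript: str) -> dict:
    """Return simple word-level statistics for one interview answer."""
    # Lowercase, split on whitespace, and strip trailing punctuation.
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    n = len(words)
    fillers = sum(1 for w in words if w in FILLERS)
    unique = len(set(words))
    return {
        "word_count": n,
        "filler_ratio": fillers / n if n else 0.0,
        "lexical_diversity": unique / n if n else 0.0,
    }

feats = verbal_features("Um I basically led the project and, uh, shipped it")
```

A real system would combine such verbal statistics with facial and prosodic features, but the principle — mapping raw behavior to a numeric feature vector — is the same.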

Cited by 95 publications (79 citation statements). References 43 publications.
“…In particular, the tool should include all aspects of highly automated interviews: acquire information, analyze information, select and decide about potential actions, and implement these actions (Hoff & Bashir, ; Parasuraman et al, ). Furthermore, we followed the general idea underlying highly automated approaches as we decided to highlight the importance of verbal, paraverbal, and nonverbal behavior information (see Naim et al, ; Schmid Mast et al, ). We used the approach of Langer and colleagues (), who used a highly automated interview tool to create the video for the highly automated interview conditions.…”
Section: Methods (mentioning)
confidence: 99%
“…At lower levels of automation, this might not be different from automatically building scores by averaging the evaluation of different interviewers (Bobko, Roth, & Buster, ; see also Nolan et al, ). At higher levels of automation, this could mean that machine learning algorithms were trained on past data of successful and unsuccessful applicants to learn what distinguishes them (Chamorro‐Premuzic et al, ; Naim et al, ). This way, algorithms learn to automatically evaluate new interviewees regarding their interview performance and might even provide an overall score for the interviewees that could either serve as a recommendation for hiring managers (cf.…”
Section: Background and Hypotheses Development (mentioning)
confidence: 99%
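The passage above describes algorithms trained on past successful and unsuccessful applicants that then score new interviewees. As a hedged sketch of that idea — not the cited works' actual method — the following uses a simple nearest-centroid rule over hand-crafted behavior features; the feature names and training data are hypothetical.

```python
# Hedged sketch: one possible way (not the cited works' actual method) to
# train on past hired / not-hired applicants and score new interviewees.
# Feature vectors here are [smile_rate, speaking_rate, filler_ratio],
# all hypothetical.

from statistics import mean

def train_centroids(examples):
    """examples: list of (feature_vector, hired: bool) pairs.
    Returns the mean feature vector of each class."""
    hired = [f for f, y in examples if y]
    rejected = [f for f, y in examples if not y]
    centroid = lambda rows: [mean(col) for col in zip(*rows)]
    return centroid(hired), centroid(rejected)

def score(features, centroids):
    """Return a 0..1 score: closer to the 'hired' centroid means higher."""
    c_hired, c_rejected = centroids
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    dh, dr = dist(features, c_hired), dist(features, c_rejected)
    return dr / (dh + dr) if (dh + dr) else 0.5

past = [([0.8, 1.2, 0.02], True),  ([0.7, 1.1, 0.05], True),
        ([0.2, 0.6, 0.20], False), ([0.3, 0.5, 0.15], False)]
cents = train_centroids(past)
s = score([0.75, 1.15, 0.03], cents)  # candidate resembling past hires
```

A production system would use a properly validated machine-learning model, but the workflow is the same: learn from labeled past applicants, then emit a score that could serve as a recommendation for hiring managers.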
“…This corpus consists of face-to-face job interviews for a marketing short assignment whose candidates are mainly students. There are video corpora of face-to-face mock interviews that include two corpora built at the Massachusetts Institute of Technology (Naim et al 2018), and a corpus of students in services related to hospitality (Muralidhar et al 2016). Many corpora of simulated asynchronous video interviews have also been built: a corpus of employees (Chen et al 2016), a corpus of students from Bangalore University, and a corpus collected through the use of crowdsourcing tools.…”
Section: Related Work: Databases (mentioning)
confidence: 99%
“…
Study                                | Format             | Real open position         | Number of candidates
(Nguyen et al 2014)                  | Face to Face       | Marketing short assignment | 36
(Muralidhar et al 2016)              | Face to Face       | None                       | 169
(Naim et al 2018)                    | Face to Face       | None                       | 138
(Chen et al 2016)                    | Asynchronous Video | None                       | 36
(Rasipuram, Rao, and Jayagopi 2017)  | Asynchronous Video | None                       | 106
(unnamed)                            | Asynchronous Video | None                       | 100
(Rupasinghe et al 2017)              | Asynchronous Video | None                       | 36
(unnamed)                            | Asynchronous Video | None                       | 260
This Study                           | Asynchronous Video | Sales positions            | 7095
Table 1: Summary of job interview databases
… on important moments during dyadic conversations (Yu et al 2017). Finally, numerous models have been proposed to model the interactions between modalities in emotion detection tasks through attention mechanisms (Zadeh et al 2017;).…”
Section: Interview (mentioning)
confidence: 99%