2021
DOI: 10.1080/0960085x.2021.1927213

The dark sides of people analytics: reviewing the perils for organisations and employees

Abstract: Technological advances in the field of artificial intelligence (AI) are heralding a new era of analytics and data-driven decision-making. Organisations increasingly rely on people analytics to optimise human resource management practices in areas such as recruitment, performance evaluation, personnel development, health and retention management. Recent progress in the field of AI and ever-increasing volumes of digital data have raised expectations and contributed to a very positive image of people analytics. H…

Cited by 86 publications (73 citation statements)
References 132 publications (477 reference statements)
“…This finding corresponds with prior studies showing that individuals seek to avoid situations in which they may harm others, because decision delegation reduces decision makers’ mental costs of feeling responsible ( Steffel et al, 2016 ). These psychological and moral factors relating to accountability are important predictors of algorithm aversion ( Giermindl et al, 2021 ). For instance, experimental research by Newman et al (2020) shows that people perceive algorithm-made HR decisions as less fair, a finding that was stable irrespective of whether employees were selected for promotion or layoff.…”
Section: Theory
confidence: 99%
“…Other key players in personnel information systems such as IBM, SAP, and Oracle use integrated application tools to accumulate HR data from existing databases ( Angrave et al, 2016 ). Typical case studies of firms using algorithms strategically to enhance the efficiency of their talent management are, among many others, the tech giants Google ( People Analytics ; Shrivastava et al, 2018 ) and Microsoft ( MyAnalytics ; Giermindl et al, 2021 ), the bank ING ( Peeters et al, 2020 ), the cybersecurity firm Juniper Networks ( Boudreau and Rice, 2015 ), the retailer Wal-Mart ( Haube, 2015 ), and online retailer Zalando using the software Zonar ( Staab and Geschke, 2020 ).…”
Section: Introduction
confidence: 99%
“…Analyzing existing HR data initially uncovers that, despite the widespread narrative of ever-increasing data volumes (e.g., Van der Togt and Rasmussen, 2017; Giermindl et al., 2021), there are numerous cases without big data. Starting with data volume, only half of the respective sets of configurations show larger volumes.…”
Section: Results of mvQCA
confidence: 99%
“…Against the backdrop of an increasing interest and relevance it comes, however, as a certain surprise that this technological dimension of HRA success is hardly considered so far. A systematic evaluation of existing narrative reviews of HRA literature (Chalutz Ben-Gal, 2019; Garcia-Arroyo and Osca, 2021; Giermindl et al., 2021; Margherita, 2022; Marler and Boudreau, 2017; Qamar and Samad, 2022; Tursunbayeva et al., 2018) concurringly uncovers a general scarcity of empirical research on HRA success. Only few practitioner (e.g.…”
confidence: 99%
“…One of the best-known real-world examples is the case of Amazon in 2018, where a tested AI software systematically discriminated against women in the hiring process [3]. Various researchers, therefore, have warned of the significant risk these tools’ unknown flaws, such as algorithmic bias [4], pose to organizations implementing new forms of AI in their human resources (HR) processes. Similarly, several philosophers [e.g., 5] have condemned the use of AI in recruitment, denying that AI could possess the social and empathetic skills needed in the selection process.…”
Section: Introduction
confidence: 99%