2022
DOI: 10.1016/j.tele.2022.101815
Talking about facial recognition technology: How framing and context influence privacy concerns and support for prohibitive policy

Cited by 15 publications (6 citation statements)
References 64 publications
“…Additionally, empirical research has shown that FRT is biased against minority populations, as it is less accurate for the faces of people between the ages of 18–30, particularly women and people of color (Klare et al, 2012). These findings are consistent with studies underpinning the argument that FRT disproportionately impacts minority groups and greatly impedes privacy (Buolamwini and Gebru, 2018; Grother et al, 2019; Shore, 2022). Within these concerns, empirical survey-based research has shown that individuals' acceptance of facial recognition varies across different political contexts and socio-demographic factors.…”
Section: Introduction (supporting)
confidence: 86%
“…AIS operate within uncertain and unpredictable environments. Environmental dimensions include changing ethical expectations, like emerging public pressures relating to facial recognition [62,63], a developing regulatory environment exemplified by the recent release of the European Union AI Act [12], and an evolving cultural environment as the system is used in new countries around the world [64]. As a result, standard risk management approaches fall short.…”
Section: Extending ERM for AI Ethical Risks (mentioning)
confidence: 99%
“…With respect to surveillance and privacy as key internet policy issues, the DPL framework suggests that commercial surveillance predicated on platform infrastructure requires regulatory frameworks informed by broader rights-based approaches to privacy (Smith, Shade, and Shepherd 2017: 2795). These imperatives become particularly important in the context of facial recognition technologies that contribute to networked surveillance infrastructure that includes social media platforms, algorithmic sorting, and predictive policing (e.g., Kosta 2022; Shore 2022). Moreover, when dealing with the Canadian context as I do here, digital policy issues become further mired in the geopolitical dynamics of how large US tech firms come up against Canadian regulatory frameworks.…”
Section: Introduction (mentioning)
confidence: 97%