Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3544548.3581347

“That’s important, but...”: How Computer Science Researchers Anticipate Unintended Consequences of Their Research Innovations

Abstract: Computer science research has led to many breakthrough innovations but has also been scrutinized for enabling technology that has negative, unintended consequences for society. Given the increasing discussions of ethics in the news and among researchers, we interviewed 20 researchers in various CS sub-disciplines to identify whether and how they consider potential unintended consequences of their research innovations. We show that considering unintended consequences is generally seen as important but rarely pr…

Cited by 11 publications (4 citation statements)
References 111 publications

“…Prior work analyzed these broader impacts statements, finding convergence around a set of topics, such as risks to privacy and bias, but often lacking concrete specifics or strategies for mitigation [8,99,127,167]. However, prior work suggests that many CS researchers may not have the training, resources, or inclination to engage in this type of anticipatory work [45,175], indicating that new tools, training, and processes are needed to support researchers and developers in integrating anticipatory work into their research practices. More recently, researchers have proposed a framework that uses LLMs to anticipate harms for classifiers by generating stakeholders and vignettes for a given scenario [24], evaluating this framework through interviews with responsible AI researchers.…”
Section: Related Work 2.1 Anticipating Technology's Negative Impacts
Classification: mentioning (confidence: 99%)
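
The framework cited as [24] is only summarized here, but its core loop (prompt an LLM for affected stakeholders, then for a harm vignette per stakeholder) is straightforward to illustrate. Below is a minimal Python sketch of that idea; the function name, the prompts, and the `complete` callable (a stand-in for any text-completion API) are illustrative assumptions, not the cited authors' implementation.

```python
# Hypothetical sketch of the LLM-based anticipation idea described above:
# given a classifier scenario, ask an LLM for affected stakeholders, then
# generate a short harm vignette for each one. `complete` is a placeholder
# for any text-completion API; prompts and parsing are assumptions.
from typing import Callable, List, Dict

def anticipate_harms(scenario: str, complete: Callable[[str], str]) -> List[Dict[str, str]]:
    """Return a list of {stakeholder, vignette} records for a scenario."""
    stakeholder_prompt = (
        f"List five stakeholders who could be affected by this system: {scenario}\n"
        "One stakeholder per line, no numbering."
    )
    # One stakeholder per non-empty line of the model's response.
    stakeholders = [s.strip() for s in complete(stakeholder_prompt).splitlines() if s.strip()]

    records = []
    for stakeholder in stakeholders:
        vignette_prompt = (
            f"In 2-3 sentences, describe how '{stakeholder}' could be harmed "
            f"by this system: {scenario}"
        )
        records.append({"stakeholder": stakeholder, "vignette": complete(vignette_prompt)})
    return records
```

For example, `anticipate_harms("a resume-screening classifier", my_llm)` would yield stakeholder/vignette pairs that a researcher could then review, critique, and refine, which is the anticipatory step the surrounding text describes.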
“…Guide users in imagining use cases. Existing research highlights the challenges faced by ML practitioners when attempting to anticipate the uses of their ML-powered applications and how different individuals or groups may be affected [20,45,103,171]. Confirming this, software engineer U6 noted "You don't really know how your tool could be used, so it's really hard to envision what harms would be."…”
Section: Design Goals
Classification: mentioning (confidence: 99%)
“…They ask for tools that fit into their resource constraints, as ML practices outside of big tech may not have the bandwidth to carry out some of the responsible AI investigations of larger companies [29]. Finally, technologists require structures for discussing and reflecting on the ethical implications of their work [10,16,24,28]. Conflicts between values themselves and between different professionals will arise naturally, so to address this, organisations will need to "(...) mobilize resources to create safe spaces and encourage explicit disagreements among practitioners positively [and] enable them to constantly question RAI values (...)" [59, p.13].…”
Section: Related Work 2.1 Current Responsible AI and Its Shortcomings
Classification: mentioning (confidence: 99%)