2023
DOI: 10.1145/3587157
What’s (Not) Working in Programmer User Studies?

Abstract: A key goal of software engineering research is to improve the environments, tools, languages, and techniques programmers use to efficiently create quality software. Successfully designing these tools and demonstrating their effectiveness involves engaging with tool users — software engineers. Researchers often want to conduct user studies of software engineers to collect direct evidence. However, running user studies can be difficult, and researchers may lack solution strategies to overcome the barriers, so th…

Cited by 6 publications (3 citation statements)
References 94 publications
“…Echoing others [e.g., 85,155], Ledo et al. [125] reiterate that evaluation methods for a specific toolkit should be determined in light of the claims researchers wish to make about the toolkit. As we explore further in Sections 5.2 and 5.3, we agree with the call to align evaluation methods with research claims, and draw on HCI guidelines for evaluation design that have been developed in response to this call [e.g., 53,146] to inform an RAI tool effectiveness evaluation framework. As RAI tools aim to address ethical issues in AI development, claims related to the consequences of their use or effectiveness are wide in their sociopolitical scope, implicating not only RAI tool developers and users, but also AI system stakeholders and communities who may be affected by AI system deployment.…”
Section: Tool Evaluations in HCI
Confidence: 89%
“…The objective of an effectiveness evaluation framework is to support evaluators of RAI tools in designing robust evaluations, commensurate with similar efforts in the HCI and software engineering fields to develop frameworks for usability evaluations [e.g., 53,118], evaluations of ML models [103], and the design of empirical studies in software engineering [e.g., 116]. Below we provide initial design desiderata for the development of a framework that meets this objective.…”
Section: Design Desiderata for an Effectiveness Evaluation Framework
Confidence: 99%