2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)
DOI: 10.1109/icse43902.2021.00057

Do you Really Code? Designing and Evaluating Screening Questions for Online Surveys with Programmers

Abstract: Recruiting professional programmers in sufficient numbers for research studies can be challenging because they often cannot spare the time, or due to their geographical distribution and potentially the cost involved. Online platforms such as Clickworker or Qualtrics do provide options to recruit participants with programming skill; however, misunderstandings and fraud can be an issue. This can result in participants without programming skill taking part in studies and surveys. If these participants are not det…


Cited by 40 publications (19 citation statements)
References 38 publications
“…To do that, we used a one-minute timeboxed three-question multiple-choice survey. Danilova et al developed a list of the most compelling questions to ask in such pilot studies; we strongly recommend using those [3].…”
Section: Selection Process (mentioning)
confidence: 99%
“…In this exposition, we will not consider construct validity issues.…” [Footnotes: 2. Prolific is an academic data collection platform: www.prolific.co; 3. www.qualtrics.com]
(mentioning)
confidence: 99%
“…Especially the number of professional reversers is rather small and becomes even smaller when only considering the ones willing to spend time on a user survey. Additionally, recent research indicates that even popular paid services claiming to provide qualified survey participants can and should not be used to overcome this limitation [9]. Consequently, like previous studies with a similar topic [31], [47], we have to work with a relatively limited number of participants compared to less-technical user surveys.…”
Section: Survey Limitations and Threats to Validity (mentioning)
confidence: 99%
“…To ensure the quality of the data, we added competence screening questions and random attention checks. The competence screening part included questions from Danilova et al [78] and aimed to filter out people who do not meet the software engineering knowledge requirement. Attention checks were randomly inserted between questions to ensure the reliability of the answers.…”
Section: Survey Data Collection (mentioning)
confidence: 99%