Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/2858036.2858498
Local Standards for Sample Size at CHI

Abstract: We describe the primary ways researchers can determine the size of a sample of research participants, present the benefits and drawbacks of each of those methods, and focus on improving one method that could be useful to the CHI community: local standards. To determine local standards for sample size within the CHI community, we conducted an analysis of all manuscripts published at CHI 2014. We find that sample size for manuscripts published at CHI ranges from 1 to 916,000, and the most common sample size is 12. We…
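One formal alternative to the local standards the paper studies is an a priori power analysis, which derives the required sample size from the expected effect size rather than from community norms. The sketch below is my own illustration, not from the paper: it uses the standard normal approximation for a two-sample t-test, with `d` denoting Cohen's d, to show why a small effect demands far more participants than the CHI-typical n = 12.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group n for a two-sided, two-sample t-test,
    using the normal approximation: n = 2 * ((z_{1-a/2} + z_{power}) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A "small" effect (d = 0.2) needs hundreds of participants per group,
# while a "large" effect (d = 0.8) needs only a few dozen:
print(sample_size_per_group(0.2))  # 393 per group
print(sample_size_per_group(0.8))  # 25 per group
```

The normal approximation slightly underestimates n for very small samples; exact t-based calculations (e.g. via statsmodels' `TTestIndPower`) give marginally larger numbers, but the contrast with a local standard of 12 is the same.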

Cited by 361 publications (166 citation statements)
References 24 publications
“…In practice, the effect remains difficult to study as the small effect size requires large participant pools to reliably detect the effect. Such large participant pools are rather uncommon in HCI [12] with the exception of crowdsourced online experiments where the reduced experimental control might negatively effect the signal to noise ratio of an already small effect. Besides such practical considerations, the very large effect on (dis)comfort severely limits the range of acceptable expansive interfaces.…”
Section: Methods
confidence: 99%
“…The analysis that brought forward this finding was exploratory, and our experiment included only 80 participants - more than usual in-person experiments in HCI [12] but less than the failed replications of explicitly elicited power poses. We suggest that replications could focus on specific, promising or important application areas where effects in different directions might have an either desirable or detrimental impact on people's lives, and participants should be screened for relevant personality traits, such as impulsiveness or the "big-five" [33], to examine interaction effects with these covariates.…”
Section: Need For Replication
confidence: 99%
“…The user experience questionnaire (UEQ) is used in our experiment. Users' burden was reported as a high negative factor leading to users' quitting.…”
Section: Methods
confidence: 99%
“…We conduct the experiment with 20 participants [2]. Before the experiment, we introduce the experiment to the candidates by saying it is to promote daily exercise to promote healthy behavior.…”
Section: Participants
confidence: 99%
“…These could be patterned after (or even use) badges from the Open Science Framework (OSF) [1]
- Voluntary pre-registration of analyses for papers
- MOOCs or other courses for authors and reviewers
- Suggested graduate curricula for HCI PhDs
- Adding new metadata in PCS, such as experimental meta-data similar to that collected manually by Caine [2] to facilitate tracking changes in statistical practice in the field over time.…”
confidence: 99%