27th International Conference on Intelligent User Interfaces 2022
DOI: 10.1145/3490100.3516458
Neural Language Models as What If? -Engines for HCI Research

Abstract: Collecting data is one of the bottlenecks of Human-Computer Interaction (HCI) and user experience (UX) research. In this poster paper, we explore and critically evaluate the potential of large-scale neural language models like GPT-3 in generating synthetic research data such as participant responses to interview questions. We observe that in the best case, GPT-3 can create plausible reflections of video game experiences and emotions, and adapt its responses to given demographic information. Compared to real pa…
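The abstract describes prompting a large language model to act as a hypothetical study participant and answer interview questions in character. Below is a minimal sketch of how such a synthetic interview response might be generated, assuming the current OpenAI Python client (v1.x); the model name, persona fields, and interview question are illustrative assumptions, not the prompts or model configuration used in the paper.

```python
# Sketch: generating a synthetic interview response with a large language model,
# in the spirit of the paper's GPT-3 experiments.
# Assumptions: the openai Python package (v1.x) is installed and OPENAI_API_KEY
# is set; the model name, persona, and question below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment


def synthetic_interview_response(age: int, gender: str, game: str, question: str) -> str:
    """Ask the model to answer an interview question in the voice of a
    hypothetical participant described by simple demographic fields."""
    persona = (
        f"You are a {age}-year-old {gender} study participant who has just "
        f"played the video game '{game}'. Answer the interviewer in the first "
        f"person, reflecting on your experience and emotions."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; the paper used GPT-3
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(synthetic_interview_response(
        age=34,
        gender="female",
        game="Stardew Valley",
        question="How did playing the game make you feel?",
    ))
```

Varying the demographic fields in the system prompt is one simple way to probe whether the model adapts its responses to participant characteristics, which is the behaviour the paper evaluates.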

Cited by 8 publications (17 citation statements)
References 12 publications (14 reference statements)
“…However, researchers also caution towards malicious uses, negative impacts, and reliability of LLM-generated content over its low cost and high speed [26,31]. Its efficiency is contrasted with the ethical boundaries of human augmentation or automation [15] for future applications.…”
Section: Autism, Generative AI and LLMs
confidence: 99%
“…They use advanced machine learning techniques to answer the question 'What intervention(s) work, compared with what, how well?'. Another way artificial intelligence or particularly Large Language Models could be used is to simulate human-like responses and behavior 176,177 , as such techniques have the potential to mimic the response of diverse groups, and open new opportunities to test theories at an unprecedented speed. As Large Language Models can produce different experiences and perspectives, their usage offers the potential to explore the generalizability of interventions' effectiveness across different populations.…”
Section: Summary and Future Directions
confidence: 99%
“…However, practical limitations such as training an NNLM from a heterogenous corpora, inference latency [26], and federated learning [41] remain challenging. More recently, large pre-trained Transformer LMs [4,8,23,37] have also been used to improve ASR [6,16,31,40], although a domain gap may exist [24]; and have also been shown to be effective for synthetic data generation [2,14].…”
Section: Beyond IR
confidence: 99%