Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3544548.3580652

Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience

Abstract: The rise of powerful large language models (LLMs) brings about tremendous opportunities for innovation but also looming risks for individuals and society at large. We have reached a pivotal moment for ensuring that LLMs and LLM-infused applications are developed and deployed responsibly. However, a central pillar of responsible AI, transparency, is largely missing from the current discourse around LLMs. It is paramount to pursue new approaches to provide transparency for LLMs, and years of research at the inters…

Cited by 27 publications (14 citation statements)
References 202 publications (216 reference statements)
“…Prior research notes that stakeholders with little to no background in data science or AI are rarely involved in problem selection and formulation, if involved at all [29,39,55,67]. There is a knowledge gap between data science and domain expertise [74,132,140]: domain experts and designers struggle to understand what AI can do, and they often envision AI services that cannot be built [35,78,135,142]. Data scientists find it challenging to elicit needs from domain experts, and without this input, they tend to envision AI services that users and impacted stakeholders do not want [74,81,88,96].…”
Section: Broadening Participation in AI Design (mentioning)
confidence: 99%
“…Prior work noted that stakeholders with little to no background in data science or AI (e.g., domain experts, UX designers, policymakers) might be involved in the design of an AI system's user interface, but rarely in conversations around the objective of the underlying model or the overall problem formulation [39,41,109,133,144]. Recently, a growing body of work in HCI and AI has called for human-centered approaches to broadening participation in AI design that meaningfully engage domain stakeholders to brainstorm and reflect on whether an envisioned future technology in fact addresses the right problem in the first place [10,34,35,40,70,78,151].…”
Section: Designing AI with Domain Stakeholders (mentioning)
confidence: 99%
“…For example, experimental work has explored integrating LLMs directly into feature-rich applications, such as Copilot in Microsoft 365 [43] and Firefly in Adobe [1]. This work has shown that developers can face new challenges in ensuring that this new avenue of conversational UX is used accurately and effectively [29,52]. As many of these interfaces are still at a nascent stage, it is unclear how the current practice (i.e., integrating the context of these feature-rich applications) can help novice end-users seek accurate and relevant assistance from LLMs.…”
Section: LLM Use for Task-Based Assistance (mentioning)
confidence: 99%
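As a concrete illustration of what "integrating the context of a feature-rich application" can mean, the sketch below prepends a snapshot of application state to the user's question before it reaches a model. This is a minimal sketch of one common pattern, not how Copilot or Firefly actually work; `AppContext`, its field names, and the `call_llm` stub are hypothetical names introduced here for illustration.

```python
from dataclasses import dataclass

@dataclass
class AppContext:
    """Hypothetical snapshot of in-app state; the fields are illustrative."""
    app_name: str
    active_document: str
    selected_text: str

def build_prompt(ctx: AppContext, question: str) -> str:
    """Prepend application context so the model can give grounded, in-app help."""
    return (
        f"You are an assistant embedded in {ctx.app_name}.\n"
        f"Open document: {ctx.active_document}\n"
        f"User's current selection: {ctx.selected_text}\n\n"
        f"User question: {question}"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical stub; a real integration would call a model API here."""
    raise NotImplementedError

ctx = AppContext("a spreadsheet editor", "Q3_budget.xlsx", "=SUM(B2:B14)")
print(build_prompt(ctx, "Why does this formula return 0?"))
# call_llm(...) would then return assistance grounded in the user's context.
```

Grounding the prompt this way is what lets an in-app assistant answer a question like "why does this formula return 0?" without the user having to restate what they are working on.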
“…Unlike traditional help-seeking media that rely on keyword matching, prompt-based interactions with LLMs offer humanlike language capabilities [29], which are unique but can also be unreliable. This unreliability stems from behaviors inherent to prompt-based LLM interactions, such as hallucination and non-deterministic output.…”
Section: Prompt-Based Interactions (mentioning)
confidence: 99%
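To make the non-determinism point concrete, the toy sketch below samples a "next token" from an invented distribution using softmax sampling with temperature, the mechanism most LLM APIs expose for controlling randomness. The logits here are fabricated for illustration; this is not a real model, only a self-contained demonstration of why an identical prompt can yield different outputs across runs.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float) -> str:
    """Softmax sampling with temperature: higher values flatten the
    distribution, making the sampled output less deterministic."""
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    r = random.uniform(0, sum(weights.values()))
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding

# Invented next-token logits after a prompt like "The capital of France is"
logits = {"Paris": 5.0, "Lyon": 2.0, "located": 1.5, "beautiful": 1.0}

for temperature in (0.2, 1.0, 2.0):
    samples = [sample_next_token(logits, temperature) for _ in range(10)]
    print(f"temperature={temperature}: {samples}")
```

At temperature 0.2 the samples are almost always "Paris"; at 2.0 they vary from run to run, which is the kind of output variability the citing authors describe as unreliable.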