Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining 2022
DOI: 10.1145/3488560.3498440

Evaluating Mixed-initiative Conversational Search Systems via User Simulation

Abstract: Clarifying the underlying user information need by asking clarifying questions is an important feature of modern conversational search systems. However, evaluating such systems by answering the prompted clarifying questions requires significant human effort, which is time-consuming and expensive. In this paper, we propose a conversational User Simulator, called USi, for automatic evaluation of such conversational search systems. Given a description of an information need, USi is capable of automatically…
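
The abstract only sketches how such a simulator is used, so the following is a minimal, hypothetical illustration of the kind of automatic evaluation loop it enables: a simulated user, seeded with a description of the information need, answers the system's clarifying question in place of a human annotator. All class and function names (SimulatedUser, SearchSystem, evaluate) are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' implementation): an automatic evaluation
# loop in which a simulated user answers a system's clarifying question.

from dataclasses import dataclass


@dataclass
class SimulatedUser:
    """Stands in for a real user; holds a textual description of the information need."""
    information_need: str

    def answer(self, clarifying_question: str) -> str:
        # In a system like USi this would come from a generative model conditioned
        # on the information need; here it is a trivial placeholder.
        return f"I am looking for {self.information_need}."


class SearchSystem:
    """Minimal mixed-initiative system stub: asks one clarifying question, then retrieves."""

    def clarify(self, query: str) -> str:
        return f"Could you tell me more about what you mean by '{query}'?"

    def retrieve(self, query: str, clarification_answer: str) -> list[str]:
        return [f"document about: {clarification_answer}"]  # placeholder ranking


def evaluate(system: SearchSystem, user: SimulatedUser, query: str) -> list[str]:
    question = system.clarify(query)
    answer = user.answer(question)           # no human in the loop
    return system.retrieve(query, answer)    # results would feed IR metrics (e.g. nDCG)


if __name__ == "__main__":
    user = SimulatedUser(information_need="dog breeds that are good with children")
    print(evaluate(SearchSystem(), user, "dogs"))
```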

Cited by 34 publications (12 citation statements)
References 55 publications

“…User simulation has been widely leveraged in the past for training the dialogue state tracking component of conversational agents using reinforcement learning algorithms, either via agenda-based or model-based simulation [19]. The highly interactive nature of conversational information access systems has also sparked renewed interest in evaluation using user simulation within the IR community [4,5,23,36,38,53]. Recently, Zhang and Balog [53] proposed a general framework for evaluating conversational recommender systems using user simulation.…”
Section: Discussion
confidence: 99%
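
As a concrete illustration of the agenda-based simulation mentioned in this excerpt, the sketch below keeps a stack ("agenda") of pending user acts derived from a goal and either answers a system request from the goal or pops the next pending act. It is a toy rendering of the general idea, not code from [19] or any cited system; all names and the act encoding are assumptions.

```python
# Hedged sketch of an agenda-based user simulator; illustrative only.

from collections import deque


class AgendaBasedUser:
    """Keeps a stack ("agenda") of pending user acts derived from a goal."""

    def __init__(self, goal: dict[str, str]):
        self.goal = goal
        # One inform-act per goal constraint, plus a final request for results.
        self.agenda = deque([("inform", slot, value) for slot, value in goal.items()])
        self.agenda.append(("request", "results", None))

    def respond(self, system_act: tuple[str, str]):
        act_type, slot = system_act
        if act_type == "request" and slot in self.goal:
            # Answer the system's question directly from the goal.
            return ("inform", slot, self.goal[slot])
        # Otherwise emit the next pending act from the agenda.
        return self.agenda.popleft() if self.agenda else None


user = AgendaBasedUser(goal={"cuisine": "italian", "area": "centre"})
print(user.respond(("request", "cuisine")))  # ('inform', 'cuisine', 'italian')
print(user.respond(("greet", "")))           # next pending agenda item
```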
“…To bridge the gap, Salle et al. [36] focus on behavioral factors like cooperativeness and patience to build a more human-like simulator for information-seeking conversations. Sekulic et al. [38] take this further by enabling simulated users to ask clarifying questions in a mixed-initiative setting. Sun et al. [40] study how to simulate user satisfaction in a human-like way for task-oriented dialogues.…”
Section: Discussion
confidence: 99%
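
The behavioural factors highlighted in this excerpt (cooperativeness, patience) can be pictured as simple parameters of a simulated user, as in the hypothetical sketch below; the parameter names, defaults, and abandonment rule are assumptions made for illustration, not values taken from Salle et al. [36] or Sun et al. [40].

```python
# Illustrative sketch of folding patience and cooperativeness into a simulated user.

import random


class BehaviouralUser:
    def __init__(self, information_need: str, patience: int = 3, cooperativeness: float = 0.8):
        self.information_need = information_need
        self.patience = patience                # how many clarifying turns are tolerated
        self.cooperativeness = cooperativeness  # probability of giving a useful answer

    def answer(self, clarifying_question: str):
        if self.patience <= 0:
            return None                         # user abandons the conversation
        self.patience -= 1
        if random.random() < self.cooperativeness:
            return f"Yes, specifically: {self.information_need}"
        return "I'm not sure, just show me something."  # uncooperative reply


user = BehaviouralUser("vegan restaurants open late", patience=2, cooperativeness=0.5)
for turn in range(3):
    print(user.answer("Do you have a cuisine in mind?"))  # third turn returns None
```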
“…An area related to CRS is conversational search systems [74,80]. Research work in this area tends to focus on resolving ambiguity in natural language [12,59,79] and on open-domain, large-scale document retrieval [11,29,60,61], and often does not utilize historic user-item interactions [28,40,67]. Conversational recommendation systems consider both users' current request and historical interactions.…”
Section: Related Work 2.1 Conversational Recommendation Systems
confidence: 99%
“…Then, there are simulators based on deep neural networks, e.g., [8,10,11]. Last, we identify simulators based on large language models, e.g., [6,12,15,16]. Commonly, simulators are developed to interact with task-oriented dialogue systems such as CIA agents.…”
Section: Introduction
confidence: 99%
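
For the large-language-model-based simulators mentioned last in this excerpt, one common pattern is to prompt the model with the information need and the dialogue history and let it answer the system's clarifying question. The sketch below shows only the prompt construction under that assumption; generate is a placeholder stub, not a real client API, and the prompt wording is illustrative.

```python
# Hedged sketch of an LLM-prompted user simulator; the LLM call is stubbed out.

def build_prompt(information_need: str, history: list[tuple[str, str]], question: str) -> str:
    turns = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in history)
    return (
        "You are a search-engine user with the following information need:\n"
        f"{information_need}\n\n"
        f"Conversation so far:\n{turns}\n"
        f"System: {question}\n"
        "Answer the system's clarifying question briefly and in character.\nUser:"
    )


def generate(prompt: str) -> str:
    # Placeholder: in practice this would call a language model; here a canned reply.
    return "I mean the programming language, not the snake."


history = [("User", "tell me about python")]
prompt = build_prompt(
    "learn the Python programming language",
    history,
    "Do you mean the snake or the language?",
)
print(generate(prompt))
```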