2005
DOI: 10.1007/11595014_57
Learning to Select Negotiation Strategies in Multi-agent Meeting Scheduling

Abstract: In this paper we look at the Multi-Agent Meeting Scheduling problem, where distributed agents negotiate meeting times on behalf of their users. While many negotiation approaches have been proposed for scheduling meetings, it is not well understood how agents can negotiate strategically in order to maximize their users' utility. To negotiate strategically, an agent needs to learn to pick a good strategy for each agent it negotiates with. We show how the playbook approach introduced by (Bowling, Browning, & Veloso 2004) for team pl…
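The playbook idea the abstract describes — choosing among negotiation strategies according to learned weights, and reinforcing a strategy's weight based on the utility it produced — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the exponential-weights update, the learning rate, and the strategy names are all assumptions made here for concreteness.

```python
import math
import random

class Playbook:
    """Minimal sketch of playbook-style strategy selection.

    Strategies are drawn with probability proportional to their weights,
    and the weight of a strategy is reinforced according to the reward
    (e.g. user utility) observed after using it. The exponential update
    below is an illustrative assumption, not the published update rule.
    """

    def __init__(self, strategies, eta=0.5):
        self.strategies = list(strategies)
        self.eta = eta                        # learning rate (assumed)
        self.weights = [1.0] * len(self.strategies)

    def select(self, rng=random):
        """Sample a strategy with probability proportional to its weight."""
        total = sum(self.weights)
        r = rng.uniform(0.0, total)
        acc = 0.0
        for strategy, w in zip(self.strategies, self.weights):
            acc += w
            if r <= acc:
                return strategy
        return self.strategies[-1]

    def reinforce(self, strategy, reward):
        """Scale the chosen strategy's weight up for reward > 0.5, down otherwise.

        `reward` is assumed to lie in [0, 1].
        """
        i = self.strategies.index(strategy)
        self.weights[i] *= math.exp(self.eta * (reward - 0.5))
```

Over repeated negotiations with the same counterpart, strategies that consistently yield high utility accumulate weight and are selected more often, which is the qualitative behavior the citing papers below refer to as "reinforcing these rules/strategies with different weights based on their performance."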

Cited by 16 publications (11 citation statements)
References 10 publications
“…Finally, there have been recent advances in collaborative multiagent learning between distributed sites related to our proposed work. For instance, the idea of using a playbook to select different rules or strategies, reinforcing these rules/strategies with different weights based on their performance, is proposed in [29]. However, while the playbook proposed in [29] is problem specific, we envision a broader set of rules capable of selecting optimization algorithms with inherent analytical properties, leading to utility maximization not only of stream processing but also of distributed systems in general.…”
Section: Markov Decision Process Versus Rules-based Decisionmentioning
confidence: 99%
“…For instance, the idea of using a playbook to select different rules or strategies, reinforcing these rules/strategies with different weights based on their performance, is proposed in [29]. However, while the playbook proposed in [29] is problem specific, we envision a broader set of rules capable of selecting optimization algorithms with inherent analytical properties, leading to utility maximization not only of stream processing but also of distributed systems in general. Furthermore, our aim is to construct a purely automated framework for both information gathering and distributed decision making, without requiring supervision, as supervision may not be possible across autonomous sites or can lead to high operational costs.…”
Section: Markov Decision Process Versus Rules-based Decisionmentioning
confidence: 99%
“…Example control applications are 2D cursor control [15], planar mobile robots [4], and discrete control of 4 DOF robot arms [8,3]. Bitzer and van der Smagt [1] have performed high-DOF robot hand control by reducing the DOFs to a discrete set of poses that can be indexed through kernel-based classification.…”
Section: Introductionmentioning
confidence: 99%
“…Specifically for neural decoding, efforts to decode user neural activity into control signals have demonstrated success limited to 2-3 DOFs, with bandwidth around 15 bits/sec [7]. With such limited bandwidth, control applications have focused on low-DOF systems, such as 2D cursor control [8], planar mobile robots [1], and discrete control of 4 DOF robot arms [7], [9]. Additionally, Bitzer and van der Smagt [4] have performed high-DOF robot hand control by reducing the DOFs to a discrete set of poses that can be indexed through kernel-based classification.…”
Section: Introductionmentioning
confidence: 99%
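The kernel-based pose indexing the citing papers attribute to Bitzer and van der Smagt — reducing a high-DOF hand to a discrete set of poses and classifying an incoming signal into one of them — might look roughly like this. The RBF kernel and the summed-similarity voting scheme are illustrative assumptions made here, not the authors' actual method; the feature vectors and pose labels are placeholders.

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel similarity between two feature vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def classify_pose(signal, labeled_examples, gamma=1.0):
    """Index a discrete pose by kernel classification.

    Returns the pose label whose training examples have the highest
    summed kernel similarity to the incoming signal. `labeled_examples`
    is a list of (feature_vector, pose_label) pairs.
    """
    scores = {}
    for features, label in labeled_examples:
        scores[label] = scores.get(label, 0.0) + rbf_kernel(signal, features, gamma)
    return max(scores, key=scores.get)
```

The appeal of this reduction, as the citing text notes, is that a low-bandwidth control signal only needs to select one label from a small discrete set, rather than drive every degree of freedom of the hand continuously.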