2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
DOI: 10.1109/icsmc.2012.6378020
Inverse reinforcement learning for decentralized non-cooperative multiagent systems

Cited by 22 publications (24 citation statements)
References 7 publications
Citation types: 0 supporting, 23 mentioning, 0 contrasting
“…Despite the fact that most of the classical IRL techniques focus on learning reward functions for single-agent problems, recent proposals have begun to adapt IRL to learn multiagent reward functions. Supposing that the agents follow a Nash Equilibrium, Reddy et al (2012) propose a method for approximating the reward functions for all agents by computing them in a distributed manner. Natarajan et al (2010) apply IRL with a different purpose.…”
Section: Inverse Reinforcement Learning
confidence: 99%
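
To make the decentralized scheme quoted above concrete, here is a minimal sketch (hypothetical code, not Reddy et al.'s implementation). It assumes each agent's induced single-agent MDP has already been built by freezing the other agents' observed equilibrium policies, which the Nash-equilibrium premise makes sensible; a standard single-agent IRL step (Ng & Russell's linear-programming formulation, used here as a stand-in) is then applied to each agent in turn. All function and variable names are illustrative.

    import numpy as np
    from scipy.optimize import linprog

    def single_agent_irl(P, expert_actions, gamma=0.9, r_max=1.0):
        # Ng & Russell-style LP for one agent's reward, applied to the
        # MDP obtained by holding the other agents' observed policies
        # fixed. P: (n_actions, n_states, n_states) transition matrices;
        # expert_actions: (n_states,) observed action per state.
        n_actions, n_states, _ = P.shape
        Pa_star = P[expert_actions, np.arange(n_states), :]  # expert rows
        inv = np.linalg.inv(np.eye(n_states) - gamma * Pa_star)

        # Variables x = [R, t]; maximize sum(t), i.e. minimize -sum(t).
        c = np.concatenate([np.zeros(n_states), -np.ones(n_states)])
        A_ub, b_ub = [], []
        for s in range(n_states):
            for a in range(n_actions):
                if a == expert_actions[s]:
                    continue
                # Value advantage of the expert action over action a.
                m = (Pa_star[s] - P[a, s]) @ inv
                # Feasibility: m @ R >= 0  ->  -m @ R <= 0
                A_ub.append(np.concatenate([-m, np.zeros(n_states)]))
                b_ub.append(0.0)
                # Margin slack: t_s <= m @ R  ->  t_s - m @ R <= 0
                row = np.concatenate([-m, np.zeros(n_states)])
                row[n_states + s] = 1.0
                A_ub.append(row)
                b_ub.append(0.0)
        bounds = [(-r_max, r_max)] * n_states + [(None, None)] * n_states
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=bounds)
        return res.x[:n_states]

    def decentralized_mirl(induced_mdps, observed_policies, gamma=0.9):
        # The decentralized step: infer each agent's reward one at a
        # time. induced_mdps[i] is the single-agent MDP agent i faces
        # once the other agents' policies are frozen (assumed
        # precomputed here).
        return {i: single_agent_irl(P_i, observed_policies[i], gamma)
                for i, P_i in induced_mdps.items()}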
“…Yet, IRL is a promising line of research, as training autonomous agents without explicit reward functions might be required for the development of general-purpose robots that receive instructions from laypeople. Despite these two issues, the recent trend on multiagent IRL might inspire new methods for influencing a group of agents to assume a collaborative behavior (Natarajan et al, 2010;Reddy et al, 2012;Lin et al, 2018).…”
Section: Inverse Reinforcement Learning
confidence: 99%
“…Multiple algorithms have been proposed for inverse reinforcement learning in multi-agent settings [1], [2], [11], [14], [15], [16], [17], [18], [19]. Both [14] and [15] extend the single-agent IRL algorithm of [20] to the multi-agent setting. [14] assumes that the problem can be described in terms of a centralized controller and a weighted cooperative reward function.…”
Section: Related Work
confidence: 99%
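
As a hedged illustration only (an assumption about what a weighted cooperative reward might look like, not a claim about [14]'s exact formulation): the joint objective could take the form R(s, a_1, ..., a_n) = sum_i w_i R_i(s, a_i), with the centralized controller choosing the joint action to maximize this single combined reward.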
“…Before introducing our algorithms, we review several existing approaches to MIRL and related problems. The first of these is a decentralized MIRL (d-MIRL) algorithm developed by Reddy et al [29]. This algorithm is decentralized in the sense that it infers agents' rewards one by one, rather than all at once.…”
Section: Conventional MIRL Approaches
confidence: 99%
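
For concreteness, a toy invocation of the one-by-one inference sketched earlier, with every number invented purely for illustration:

    # Hypothetical two-agent problem: 3 states, 2 actions per agent;
    # induced transitions drawn at random for demonstration only.
    rng = np.random.default_rng(0)
    induced = {0: rng.dirichlet(np.ones(3), size=(2, 3)),
               1: rng.dirichlet(np.ones(3), size=(2, 3))}
    observed = {0: np.array([0, 1, 0]), 1: np.array([1, 1, 0])}
    rewards = decentralized_mirl(induced, observed)  # one reward vector per agent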