2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI: 10.1109/allerton.2016.7852374

From behavior to sparse graphical games: Efficient recovery of equilibria

Abstract: In this paper we study the problem of exact recovery of the pure-strategy Nash equilibria (PSNE) set of a graphical game from noisy observations of the joint actions of the players alone. We consider sparse linear influence games, a parametric class of graphical games with linear payoffs, represented by directed graphs of n nodes (players) with in-degree at most k. We present an ℓ1-regularized logistic regression based algorithm for recovering the PSNE set exactly, that is both computationally efficient, i.e…

Cited by 8 publications (16 citation statements) · References 9 publications (21 reference statements)
“…We would like to prove that β_ijk = 0, ∀j ∉ S_i and β_ijk ≠ 0, ∀j ∈ S_i. This gives us a straightforward way of picking the in-neighbors of player i by solving optimization problem (10). We use an auxiliary variable w = ∑_{j≠i} ∑_{k=0}^{r} |β_ijk| and prove the following lemma to obtain the Karush-Kuhn-Tucker (KKT) conditions at the optimum.…”
Section: Sampling Mechanism
confidence: 99%
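For context, the stationarity part of KKT conditions for an ℓ1-penalized objective of the generic form min_β L(β) + λ‖β‖₁ takes the familiar form below; this is a generic sketch, not the paper's exact problem (10):

```latex
% Stationarity conditions at a minimizer \hat\beta of
% \min_\beta \; L(\beta) + \lambda \|\beta\|_1 (generic sketch):
\nabla_j L(\hat\beta) = -\lambda \,\operatorname{sign}(\hat\beta_j)
  \quad \text{if } \hat\beta_j \neq 0,
\qquad
\bigl|\nabla_j L(\hat\beta)\bigr| \le \lambda
  \quad \text{if } \hat\beta_j = 0.
```

The second condition is what makes support recovery possible: coordinates whose gradient stays strictly below λ are driven exactly to zero.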
See 1 more Smart Citation
“…We would like to prove that β ijk " 0, @j R S i and β ijk ‰ 0, @j P S i . This gives us a straight forward way of picking in-neighbors of player i by solving optimization problem (10). We use an auxiliary variable w " ř j‰i ř r k"0 |β ijk | and prove the following lemma to get Karush-Kuhn-Tucker (KKT) conditions at the optimum.…”
Section: Sampling Mechanismmentioning
confidence: 99%
“…However, their method runs in exponential time and the authors assumed a specific observation model for the strategy profiles. For the same specific observation model, [10] proposed a polynomial time algorithm, based on ℓ1-regularized logistic regression, for learning linear influence games. Their strategy profiles (or joint actions) were drawn from a mixture of uniform distributions: one over the pure-strategy Nash equilibria (PSNE) set, and the other over its complement.…”
Section: Introduction
confidence: 99%
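The ℓ1-regularized logistic regression step described above can be sketched as follows. This is a minimal illustration using proximal gradient descent (ISTA), not the authors' implementation; the function name and the data are hypothetical, and the estimated support of the weight vector stands in for the in-neighborhood of a single player:

```python
# Minimal sketch: recovering one player's in-neighbors via
# l1-regularized logistic regression (illustrative, not the paper's code).
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding operator, prox of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_logistic_neighbors(X, y, lam=0.1, lr=0.1, iters=2000):
    """Proximal gradient (ISTA) for l1-regularized logistic regression.

    X: joint actions of the other players in {-1,+1}, shape (m, d)
    y: actions of player i in {-1,+1}, shape (m,)
    Returns the weight vector; its nonzero entries mark estimated
    in-neighbors of player i.
    """
    m, d = X.shape
    w = np.zeros(d)
    y01 = (y + 1) / 2                      # map {-1,+1} labels to {0,1}
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # P(y = +1 | x) under the model
        grad = X.T @ (p - y01) / m         # gradient of the logistic loss
        w = soft_threshold(w - lr * grad, lr * lam)
    return w
```

On synthetic joint actions where player i's action depends only on two of the other players, the nonzero pattern of the returned weights identifies those two coordinates, while the irrelevant coordinates are thresholded exactly to zero.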
“…The above results pertain to a specific class of payoff functions with a particular parametric representation that allows for a logistic regression approach. The results in [8] also assume strict positivity of the payoffs in the PSNE set. Thus, it is unclear how these results can be extended to general discrete actions.…”
Section: Contributions
confidence: 99%
“…The binary-action models considered in [8,9,10] are a restricted subclass of the models that we consider here. The results in [8,9,10]…”
Section: Graphical Games
confidence: 99%
“…However, their method runs in exponential time and the authors assumed a specific observation model for the strategy profiles. For the same specific observation model, [15] proposed a polynomial time algorithm, based on ℓ1-regularized logistic regression, for learning linear influence games. Their strategy profiles (or joint actions) were drawn from a mixture of uniform distributions: one over the pure-strategy Nash equilibria (PSNE) set, and the other over its complement.…”
Section: Introduction
confidence: 99%