2018 IEEE Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2018.8619211

Near-Optimal Control Strategy in Leader-Follower Networks: A Case Study for Linear Quadratic Mean-Field Teams

Abstract: In this paper, a decentralized stochastic control system consisting of one leader and many homogeneous followers is studied. The leader and followers are coupled in both dynamics and cost, where the dynamics are linear and the cost function is quadratic in the states and actions of the leader and followers. The objective of the leader and followers is to reach consensus while minimizing their communication and energy costs. The leader knows its local state and each follower knows its local state and the state …
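As a rough sketch of the setup described in the abstract (the exact dynamics, cost, and information structure are specified in the paper; the particular matrices and coupling below are illustrative assumptions only), a linear-quadratic leader-follower model with mean-field coupling can be written as

$$x^0_{t+1} = A_0 x^0_t + B_0 u^0_t + w^0_t, \qquad x^i_{t+1} = A x^i_t + B u^i_t + w^i_t, \quad i = 1, \dots, n,$$

with a per-step cost that penalizes the followers' deviation from consensus with the leader and the control effort of every agent, e.g.

$$c_t = \|\bar{x}_t - x^0_t\|_Q^2 + \|u^0_t\|_{R_0}^2 + \frac{1}{n}\sum_{i=1}^n \left( \|x^i_t - \bar{x}_t\|_P^2 + \|u^i_t\|_R^2 \right), \qquad \bar{x}_t := \frac{1}{n}\sum_{i=1}^n x^i_t.$$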

Cited by 11 publications (11 citation statements). References 20 publications.
“…However, these differences do not add much complexity to the convergence proof because the Riccati equations (5) do not depend on the number of followers according to Assumption 3. Hence, the rate of convergence with respect to the number of followers is 1/n, similar to [10, Theorem 2]. This leads to the following theorem.…”
Section: Assumption
confidence: 52%
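The 1/n rate mentioned in the excerpt above can be read as a near-optimality guarantee of the following schematic form (the notation here is illustrative and not taken from the cited work): if $J_n$ denotes the finite-population cost, $g^*_n$ an optimal strategy, and $\hat{g}$ the mean-field-based strategy, then

$$\bigl| J_n(\hat{g}) - J_n(g^*_n) \bigr| \le \frac{C}{n}$$

for some constant $C$ that does not depend on the number of followers $n$.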
“…Since the dynamic equations (1) and (2), the cost function (3), and the saddle-point strategies (6), (7) and (8) are bounded and continuous in x_t, it follows that (8) is an approximate saddle-point strategy. The reader is referred to [10] for a detailed proof in the context of optimal control, which is similar, to a great extent, to the convergence proof of the MinMax control problem considered in this subsection, but note that the Riccati equations here are different and the relative errors defined in [10] are of an intermittent nature. However, these differences do not add much complexity to the convergence proof because the Riccati equations (5) do not depend on the number of followers according to Assumption 3.…”
Section: Assumption
confidence: 98%
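For orientation only, Riccati equations of the kind referred to as (5) in the excerpt are recursions of the standard discrete-time LQ type (the cited works use different, problem-specific equations; this is merely the generic form):

$$P_t = Q + A^\top P_{t+1} A - A^\top P_{t+1} B \bigl( R + B^\top P_{t+1} B \bigr)^{-1} B^\top P_{t+1} A, \qquad P_T = Q_T.$$

The point stressed in the excerpt is that such recursions involve only the model matrices, not the number of followers n, which is what allows the approximation error to scale as 1/n.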
“…Furthermore, we demonstrate in [9] that today's most common feed-forward deep neural networks (i.e., those with the rectified linear unit activation function) may be viewed as a special case of deep structured teams, where layers are time steps and neurons are simple integrator agents whose goal is to collaborate in order to minimize a common loss (cost) function. For more applications of deep structured models, the reader is referred to reinforcement learning [8], [13], [14], nonzero-sum games [12], [15], minmax optimization [17], leader-followers [18], [19], epidemics [20], smart grids [21], mean-field teams [22]-[25], and networked estimation [26], [27].…”
Section: Introduction
confidence: 99%
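A minimal way to make the analogy in the first sentence of this excerpt concrete (a sketch of the viewpoint, not the construction given in [9]): a feed-forward ReLU network can be read as a discrete-time system in which each layer is one time step,

$$x_{t+1} = \max\bigl(W_t x_t + b_t, \, 0\bigr), \qquad t = 0, 1, \dots, T-1,$$

so the neurons act as simple agents updated by shared parameters $(W_t, b_t)$ and cooperating to minimize a common loss evaluated at the terminal state $x_T$.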
“…In addition, we raise the practical question of when the solution of an infinite-population network constitutes a meaningful approximation of the finite-population one. Inspired by existing techniques for mean-field teams [19], [20], [21], [22], [23], [24], we first compute the optimal solution of a finite-population network for the case where the empirical distribution of infected nodes is observable. Next, we derive an infinite-population Bellman equation that requires no observation of infected nodes, and identify a stability condition under which the solution of the infinite-population network constitutes a near-optimal solution for the finite-population one.…”
Section: Introduction
confidence: 99%
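Schematically (with illustrative notation not taken from the cited work), the infinite-population Bellman equation described in the excerpt is a fixed-point equation over the mean-field state $m$ (e.g., the fraction of infected nodes) rather than over the full finite-population state:

$$V(m) = \min_{a} \Bigl[ c(m, a) + \beta\, V\bigl(\Phi(m, a)\bigr) \Bigr],$$

where $\Phi$ propagates the mean field under action $a$ and $\beta$ is a discount factor; under the stability condition mentioned in the excerpt, the resulting policy is near-optimal for the finite-population network.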