2020 · Preprint · DOI: 10.48550/arxiv.2006.06932

Personalized Demand Response via Shape-Constrained Online Learning

Abstract: This paper formalizes a demand response task as an optimization problem featuring a known time-varying engineering cost and an unknown (dis)comfort function. Based on this model, this paper develops a feedback-based projected gradient method to solve the demand response problem in an online fashion, where: i) feedback from the user is leveraged to learn the (dis)comfort function concurrently with the execution of the algorithm; and, ii) measurements of electrical quantities are used to estimate the gradient of…
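To make the abstract's idea concrete, below is a minimal Python sketch of a feedback-based online projected gradient step: at each time t, the known engineering-cost gradient is combined with a learned estimate of the (dis)comfort gradient, and the update is projected onto a feasible set. This is an illustrative sketch under assumed names (box constraints, a quadratic tracking cost, a placeholder discomfort estimate), not the authors' exact algorithm.

```python
import numpy as np

def project_box(x, lower, upper):
    """Projection onto box constraints [lower, upper] (stand-in feasible set)."""
    return np.clip(x, lower, upper)

def online_step(x, grad_cost_t, grad_discomfort_est, step_size, lower, upper):
    """One projected-gradient update using the known time-varying cost gradient
    and the current learned estimate of the discomfort gradient."""
    g = grad_cost_t(x) + grad_discomfort_est(x)
    return project_box(x - step_size * g, lower, upper)

# Example usage with a time-varying quadratic engineering cost and a fixed
# placeholder estimate of the discomfort gradient (all values illustrative).
x = np.array([1.0, 1.0])
lower, upper = np.array([0.0, 0.0]), np.array([2.0, 2.0])
for t in range(50):
    target_t = np.array([0.5 + 0.01 * t, 1.5])          # known, time-varying setpoint
    grad_cost_t = lambda z, c=target_t: 2.0 * (z - c)    # gradient of ||z - c||^2
    grad_discomfort_est = lambda z: 0.2 * z              # learned estimate (placeholder)
    x = online_step(x, grad_cost_t, grad_discomfort_est,
                    step_size=0.1, lower=lower, upper=upper)
```

In the paper's setting this discomfort-gradient estimate would itself be refined online from user feedback, rather than fixed as in this toy loop.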

Cited by 1 publication (3 citation statements) · References 19 publications (41 reference statements)

“…In the proposed personalized gradient tracking strategy, the dynamic gradient tracking update is interlaced with a learning mechanism that lets each node learn the user's cost function $U_i(x)$ by employing noisy user feedback in the form of a scalar quantity $y_{i,t} = U_i(x_{i,t}) + \epsilon_{i,t}$, where $x_{i,t}$ is the local, tentative solution at time $t$ and $\epsilon_{i,t}$ is a noise term. It is worth pointing out that in this paper we consider convex parametric models, instead of more generic non-parametric models such as Gaussian processes [12, 16-21] or convex regression [22, 23]. The reasons for this choice are that (i) users' functions are, or can often be, approximated as convex (see, e.g., [24, 25] and references therein), which makes the overall optimization problem much easier to solve; (ii) convex parametric models have better asymptotic rate bounds than convex non-parametric models [22], which is fundamental when learning with scarce data; and (iii) a solid online theory already exists in the form of recursive least squares (RLS) [26-31].…”
Section: Introduction · Mentioning · Confidence: 99%
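The quoted passage fits a convex parametric model of the user's cost from noisy scalar feedback via recursive least squares. The following is a minimal RLS sketch in that spirit, assuming a scalar decision variable, a quadratic feature map, and simulated feedback; the feature map and all variable names are illustrative assumptions, not the cited paper's code.

```python
import numpy as np

def features(x):
    """Quadratic features for a scalar decision x: U(x) ≈ θ0 + θ1*x + θ2*x^2."""
    return np.array([1.0, x, x * x])

def rls_update(theta, P, x_t, y_t, forgetting=1.0):
    """Standard recursive-least-squares update with optional forgetting factor."""
    phi = features(x_t)
    denom = forgetting + phi @ P @ phi
    K = (P @ phi) / denom                       # gain vector
    theta = theta + K * (y_t - phi @ theta)     # correct the prediction error
    P = (P - np.outer(K, phi @ P)) / forgetting
    return theta, P

theta = np.zeros(3)        # parameter estimate of the user's cost model
P = 1e3 * np.eye(3)        # large initial covariance = uninformative prior

# Simulated noisy feedback y_t = U(x_t) + eps_t from a true quadratic cost
# U(x) = 2 + 0.5*x + 1.5*x^2 (illustrative ground truth).
rng = np.random.default_rng(0)
for t in range(200):
    x_t = rng.uniform(-1.0, 1.0)
    y_t = 2.0 + 0.5 * x_t + 1.5 * x_t**2 + 0.05 * rng.standard_normal()
    theta, P = rls_update(theta, P, x_t, y_t)

# theta now approximates [2.0, 0.5, 1.5], i.e. the convex parametric model is learned online.
```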
“…As mentioned, letting users' utility functions enter optimization problems, so as to drive the system to a "comfortable" solution, is fundamental in data-driven control and optimization problems involving both humans and machines (see [33, 34]). Further motivating examples include, e.g., optimizing the operation of a smart grid from an engineering perspective while taking into account users' preferences [21].…”
Section: Introduction · Mentioning · Confidence: 99%