Abstract: This paper formalizes a demand response task as an optimization problem featuring a known time-varying engineering cost and an unknown (dis)comfort function. Based on this model, the paper develops a feedback-based projected gradient method to solve the demand response problem in an online fashion, where: i) feedback from the user is leveraged to learn the (dis)comfort function concurrently with the execution of the algorithm; and ii) measurements of electrical quantities are used to estimate the gradient of…
“…In the proposed personalized gradient tracking strategy, the dynamic gradient tracking update is interlaced with a learning mechanism that lets each node learn the user's cost function U_i(x) by employing noisy user feedback in the form of a scalar quantity y_{i,t} = U_i(x_{i,t}) + ε_{i,t}, where x_{i,t} is the local, tentative solution at time t and ε_{i,t} is a noise term. It is worth pointing out that in this paper we consider convex parametric models, instead of more generic non-parametric models such as Gaussian processes [12,16-21] or convex regression [22,23]. The reasons for this choice are that: (i) users' functions are, or can often be approximated as, convex (see, e.g., [24,25] and references therein), which makes the overall optimization problem much easier to solve; (ii) convex parametric models have better asymptotic rate bounds than convex non-parametric models [22], which is fundamental when learning with scarce data; and (iii) a solid online theory already exists in the form of recursive least squares (RLS) [26-31].…”
Section: Introduction
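The feedback model quoted above (a scalar, noisy evaluation y_{i,t} = U_i(x_{i,t}) + ε_{i,t} of a convex parametric model) can be learned online with standard recursive least squares. The sketch below is illustrative, not the paper's code: it assumes a hypothetical scalar quadratic comfort model U(x) = a·x² + b·x + c with regressor φ(x) = [x², x, 1], and shows one textbook RLS step driven by such feedback.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step for the model y ≈ phi @ theta.

    theta: current parameter estimate; P: covariance-like matrix;
    phi: regressor vector; y: scalar feedback; lam: forgetting factor.
    """
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    theta = theta + k * (y - phi @ theta)  # innovation-driven correction
    P = (P - np.outer(k, Pphi)) / lam      # covariance update
    return theta, P

# Hypothetical comfort function U(x) = a*x**2 + b*x + c, observed via
# feedback y_t = U(x_t) at the tentative solutions x_t (noiseless here).
a, b, c = 2.0, -1.0, 0.5
theta = np.zeros(3)
P = 1e3 * np.eye(3)                        # large initial covariance
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.uniform(-1.0, 1.0)
    phi = np.array([x**2, x, 1.0])
    y = a * x**2 + b * x + c
    theta, P = rls_update(theta, P, phi, y)
```

With informative samples and no noise, `theta` converges to the true coefficients (a, b, c); with noisy feedback the same recursion yields the regularized least-squares estimate.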
“…As mentioned, letting users' utility functions enter optimization problems to drive the systems to a "comfortable" solution is fundamental in data-driven control and optimization problems involving both humans and machines (see [33,34]). Further motivating examples include, e.g., optimizing the operation of a smart grid from an engineering perspective while taking the user's preferences into account [21].…”
Section: Introduction
“…This represents a first step towards generic parametric models. Non-parametric approaches in the literature for learning unknown functions include, e.g., (shape-constrained) Gaussian processes [16,21,35] and convex regression [23,36]. As said, we prefer parametric models here for their faster asymptotic rates, cheap online computational load, and the ease of introducing convexity constraints.…”
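The "ease of introducing convexity constraints" mentioned above is concrete for a quadratic parametric model U(x) = xᵀAx + bᵀx + c: convexity of U is equivalent to A being positive semidefinite, so after each parameter update A can simply be projected back onto the PSD cone. The following sketch (an assumption-laden illustration, not the paper's procedure) performs that projection by eigenvalue clipping.

```python
import numpy as np

def project_psd(A, eps=0.0):
    """Project a symmetric matrix onto the PSD cone by clipping eigenvalues.

    For U(x) = x @ A @ x + b @ x + c, keeping A PSD keeps U convex, so
    this projection can follow each estimation step of the A parameters.
    """
    A = 0.5 * (A + A.T)            # symmetrize before the eigendecomposition
    w, V = np.linalg.eigh(A)       # real eigenvalues in ascending order
    w = np.clip(w, eps, None)      # clip negative eigenvalues to eps
    return V @ np.diag(w) @ V.T

A = np.array([[1.0, 0.0],
              [0.0, -2.0]])        # indefinite curvature estimate
A_psd = project_psd(A)             # nearest PSD matrix in Frobenius norm
```

This is exactly the kind of constraint that is cheap for parametric models but awkward to impose on generic non-parametric estimators.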
We present a distributed optimization algorithm for solving online personalized optimization problems over a network of computing and communicating nodes, each of which is linked to a specific user. The local objective functions are assumed to have a composite structure consisting of a known time-varying (engineering) part and an unknown (user-specific) part. The unknown part is assumed to have a known parametric (e.g., quadratic) structure a priori, whose parameters are learned along with the evolution of the algorithm. The algorithm is composed of two intertwined components: (i) a dynamic gradient tracking scheme for finding local solution estimates and (ii) a recursive least squares scheme for estimating the unknown parameters via the user's noisy feedback on the local solution estimates. The algorithm is shown to exhibit a bounded regret under suitable assumptions. Finally, a numerical example corroborates the theoretical analysis.
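The first of the two components in the abstract, gradient tracking, can be sketched on a toy static instance. The snippet below is a minimal illustration under stated assumptions (complete-graph averaging weights, known scalar quadratics f_i(x) = 0.5(x − c_i)²; in the paper the local gradients would instead mix the engineering cost with the RLS estimate of the user part, and the problem is time-varying).

```python
import numpy as np

n = 3
c = np.array([1.0, 2.0, 6.0])       # local targets; global minimizer is mean(c)
W = np.full((n, n), 1.0 / n)        # doubly stochastic complete-graph weights
alpha = 0.1                         # step size

grad = lambda x: x - c              # local gradients of f_i(x) = 0.5*(x - c_i)^2

x = np.zeros(n)                     # local solution estimates
s = grad(x)                         # trackers initialized to local gradients
g_old = s.copy()
for _ in range(200):
    x = W @ x - alpha * s           # consensus step plus descent along tracker
    g_new = grad(x)
    s = W @ s + g_new - g_old       # trackers follow the average gradient
    g_old = g_new
```

Each node's estimate converges to the network-wide minimizer mean(c) = 3.0; in the personalized setting, the same two recursions would be interlaced with the RLS updates that refine the user-specific part of each gradient.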