2023
DOI: 10.1109/tac.2023.3238856
From Data to Reduced-Order Models via Generalized Balanced Truncation

Abstract: This article proposes a data-driven model reduction approach on the basis of noisy data with a known noise model. First, the concept of data reduction is introduced. In particular, we show that the set of reduced-order models obtained by applying a Petrov-Galerkin projection to all systems explaining the data, characterized by a large-dimensional quadratic matrix inequality (QMI), can again be characterized by a lower-dimensional QMI. Next, we develop a data-driven generalized balanced truncation method that rel…
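The Petrov-Galerkin projection mentioned in the abstract can be illustrated with a minimal numerical sketch. This is not the paper's data-driven construction: here the test and trial bases W and V are arbitrary placeholders, whereas the article derives them from noisy data via the QMI characterization.

```python
import numpy as np

def petrov_galerkin_reduce(A, B, C, W, V):
    """Reduce (A, B, C) with a Petrov-Galerkin projection.

    W, V are n-by-r test/trial bases; the reduced state x_r satisfies
    x = V x_r with the residual constrained to be orthogonal to range(W).
    """
    E = W.T @ V                               # assumed invertible
    Ar = np.linalg.solve(E, W.T @ A @ V)
    Br = np.linalg.solve(E, W.T @ B)
    Cr = C @ V
    return Ar, Br, Cr

# toy example: project a 4th-order system onto a 2-dimensional subspace
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 1))
C = rng.standard_normal((1, 4))
V = rng.standard_normal((4, 2))
W = rng.standard_normal((4, 2))
Ar, Br, Cr = petrov_galerkin_reduce(A, B, C, W, V)
print(Ar.shape, Br.shape, Cr.shape)  # (2, 2) (2, 1) (1, 2)
```

When W = V with orthonormal columns this reduces to an ordinary Galerkin projection, Ar = Vᵀ A V.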

Cited by 11 publications (8 citation statements) | References 39 publications
“…In order to more intuitively show the advantages of the GI-SMOTE algorithm in synthesizing new samples, we visualize the characteristics of data distribution after applying no-sampling, SMOTE, and GI-SMOTE to three different datasets. Since the attribute of the dataset itself is high-dimensional, the principal component analysis (PCA) [23,24] technique is used as a dimension reduction tool to reduce the distribution of these datasets from high-dimensional to two-dimensional [25]. The distributions of these samples are shown in Figure 2, where orange dots represent the minority samples, blue dots represent the majority samples, and green dots represent the synthetic samples.…”
Section: Visual Presentation
confidence: 99%
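The citation statement above uses PCA purely as a visualization tool, projecting high-dimensional samples to two dimensions. A minimal sketch of that step via the SVD of the centered data (a generic illustration, not the cited authors' code):

```python
import numpy as np

def pca_2d(X):
    """Project samples (rows of X) onto their first two principal components."""
    Xc = X - X.mean(axis=0)                  # center each feature
    # right singular vectors of the centered data are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                     # n-by-2 scores, ready for a scatter plot

# toy high-dimensional dataset: 100 samples, 10 features
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
Y = pca_2d(X)
print(Y.shape)  # (100, 2)
```

The two columns of Y are ordered by explained variance, so the first axis of the resulting scatter plot always carries the most variation.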
“…Then, the system (A, B, C, D) can be seen to satisfy the dissipation inequality (90) if and only if…”
Section: Definition 8: Informativity Of Noisy Data
confidence: 99%
“…The subset of R^(n×n) consisting of all symmetric matrices is denoted by S^n. For vectors x and y, we denote x y …”

Problem | Data informativity notion | References
State feedback stabilization | E-IS | [68], [70], [71], [72]
Deadbeat controller | E-IS | [68]
LQR | E-IS | [68]
Suboptimal LQR | E-IS | [73]
Suboptimal H2 | E-IS | [73]
Synchronization | E-IS | [74]
Robust MPC | E-IS | [75]
Dynamic feedback stabilization | E-ISO | [68]
Dynamic feedback stabilization | E-IO | [68], [76]
Dissipativity | E-ISO | [77]
Tracking and regulation | E-IS | [78]
Model reduction (moment matching) | E-IO | [79]
Reachability (conic constraints) | E-IO | [80]
Stability | N-S | [81]
Stabilizability | N-IS | [81]
State feedback stabilization | N-IS | [70], [72], [82]
Stabilization for switched systems | N-IS | [83], [84], [85]
Control with input saturation | N-IS | [86]
Control of positive systems | N-IS | [87]
State feedback H2 control | N-IS | [72], [82]
Dynamic feedback H2 control | N-IO | [88]
State feedback H∞ control | N-IS | [72], [82]
Dynamic feedback H∞ control | N-IO | [88]
Stability | N-IO | [89]
Dynamic feedback stabilization | N-IO | [88], [89]
Dissipativity | N-ISO | [77]
Model reduction (balancing) | N-ISO | [90]
Structural properties | N-ISO | [91]
Absolute stabilization of Luré systems | N-ISO | [72…
confidence: 99%
“…Therefore, it is crucial to simplify models of dynamical systems by exploring lower-order models that approximate the original high-order ones according to certain criteria [14,15]. Various model reduction methods, such as balanced truncation [16], Hankel-norm reduction [17], and moment matching [18], have been developed.…”
Section: Introduction
confidence: 99%
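The balanced truncation cited above, in its classical model-based form, is the method the article generalizes to the data-driven setting. A compact square-root sketch for a stable continuous-time system (standard textbook algorithm, not the paper's data-driven variant):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable continuous-time LTI system.

    Solves the two Lyapunov equations for the Gramians, balances via an SVD
    of the Cholesky-factor product, and keeps the r states with the largest
    Hankel singular values.
    """
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                     # s = Hankel singular values
    S_inv_sqrt = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ S_inv_sqrt                # right projection
    L = S_inv_sqrt @ U[:, :r].T @ Lq.T            # left projection, L @ T = I_r
    return L @ A @ T, L @ B, C @ T, s

# example: reduce a (randomly generated, stability-shifted) 5th-order system to order 2
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5)) - 6 * np.eye(5)   # shift ensures stability
B = rng.standard_normal((5, 1))
C = rng.standard_normal((1, 5))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 2)
print(Ar.shape, np.all(hsv[:-1] >= hsv[1:]))  # (2, 2) True
```

The discarded Hankel singular values hsv[2:] give the familiar a-priori H∞ error bound of twice their sum; the data-driven generalization in the article above aims to retain such guarantees without access to (A, B, C).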