Dynamical systems are pervasive in almost all engineering and scientific applications. Simulating such systems is computationally very intensive, and hence Model Order Reduction (MOR) is used to reduce them to a lower dimension. Most MOR algorithms require solving long sequences of large, sparse linear systems. Since direct methods for solving such systems do not scale well with the input dimension, efficient preconditioned iterative methods are commonly used instead. In one of our previous works, we showed substantial improvements from reusing preconditioners for parametric MOR (Singh et al. 2019). There, we had proposed techniques for both the non-parametric and the parametric cases, but had applied them only to the latter. We make three main contributions here. First, we demonstrate that preconditioners can be reused more effectively in the non-parametric case than in the parametric one. Second, we show that reusing preconditioners is an art, via detailed algorithmic implementations in multiple MOR algorithms. Third and finally, we demonstrate that reusing preconditioners while reducing a real-life industrial problem (of size 1.2 million) leads to relative savings of up to 64% in the total computation time (in absolute terms, a saving of 5 days).
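To make the core idea concrete, the following is a minimal sketch (not the authors' code) of reusing a single preconditioner across a sequence of shifted linear systems of the form (s²M + sD + K)x = b, as they arise when reducing second-order systems at slowly varying expansion points. The matrix sizes, shift values, and the choice of an incomplete-LU preconditioner with GMRES from SciPy are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # stiffness (toy)
M = sp.identity(n, format="csc")                                         # mass (toy)
D = 0.05 * K                                                             # damping (toy)
b = np.ones(n)

shifts = [1.0, 1.05, 1.1, 1.15, 1.2]   # slowly varying expansion points (assumed)

# Factor a preconditioner only once, at the first shift ...
s0 = shifts[0]
ilu = spla.spilu((s0 ** 2) * M + s0 * D + K, drop_tol=1e-4)
P = spla.LinearOperator((n, n), matvec=ilu.solve)

# ... and reuse it for every nearby system in the sequence.
for s in shifts:
    A = (s ** 2) * M + s * D + K
    x, info = spla.gmres(A, b, M=P, atol=1e-10)
    print(f"shift {s:.2f}: info = {info}")   # info == 0 means GMRES converged
```

Because the systems change slowly from one shift (or parameter) to the next, the preconditioner built for the first system typically remains effective for its neighbors, which is what makes the reuse worthwhile.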
There exist many classes of algorithms for computing reduced-order models of parametric dynamical systems, commonly termed as parametric model order reduction algorithms. The main computational cost of these algorithms is in solving sequences of very large and sparse linear systems of equations, which are predominantly dependent on slowly varying parameter values. We focus on efficiently solving these linear systems, arising while reducing secondorder linear dynamical systems, by iterative methods with appropriate preconditioners. We propose that the choice of underlying iterative solver is problem dependent. Since for many parametric model order reduction algorithms, the linear systems right-hand-sides are available together, we propose the use of block variant of the underlying iterative method.Due to constant increase in the input model size and the number of parameters in it, computing a preconditioner in a parallel setting is increasingly becoming a norm. Since, Sparse Approximate Inverse (SPAI) preconditioner is a general preconditioner that can be naturally parallelized, we propose its use. Our most novel contribution is a technique to cheaply update the SPAI preconditioner, while solving the parametrically changing linear systems. We support our proposed theory by numerical experiments where we first show that using a block variant of the underlying iterative solver saves 80% of the computation time over the non-block version. Further, and more importantly, SPAI with updates saves 70% of the time over SPAI without updates.
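The parallel-friendliness of SPAI comes from its column-wise construction: for a fixed sparsity pattern, each column of the approximate inverse solves a small, independent least-squares problem. The sketch below illustrates this (it is not the paper's implementation, and it does not include the proposed update technique); the pattern of A itself is used as the sparsity pattern, and the matrix size is an assumption.

```python
import numpy as np
import scipy.sparse as sp

def spai_fixed_pattern(A):
    """Column-wise least-squares SPAI using the sparsity pattern of A itself."""
    A = A.tocsc()
    n = A.shape[0]
    cols = []
    for j in range(n):
        # Allowed nonzero positions of column j of the approximate inverse.
        J = A.indices[A.indptr[j]:A.indptr[j + 1]]
        # Rows of A touched by those positions.
        I = np.unique(A[:, J].nonzero()[0])
        # Small dense least-squares problem:  min || A(I, J) m - e_j(I) ||_2.
        AIJ = A[I, :][:, J].toarray()
        e = (I == j).astype(float)
        m, *_ = np.linalg.lstsq(AIJ, e, rcond=None)
        col = np.zeros(n)
        col[J] = m
        cols.append(col)
    return sp.csc_matrix(np.column_stack(cols))

A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(200, 200), format="csc")
P = spai_fixed_pattern(A)            # right preconditioner: A @ P ≈ I
print(np.linalg.norm((A @ P - sp.identity(200)).toarray()))
```

Each loop iteration is independent of the others, so the columns can be computed in parallel; the update idea in the paper exploits the fact that only some of these columns need to be recomputed (or corrected) as the parameter changes.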
Clustering large amounts of data is becoming increasingly important. Because of the large data sizes, clustering algorithms often take too much time, and sampling the data before clustering is commonly used to reduce this time. In this work, we propose a probabilistic sampling technique called cube sampling along with K-Prototype clustering. Cube sampling is used because of its accurate sample selection, and K-Prototype is the most frequently used clustering algorithm when the data is both numerical and categorical (very common today). The novelty of this work lies in obtaining the crucial inclusion probabilities for cube sampling using Principal Component Analysis (PCA). Experiments on multiple datasets from the UCI repository demonstrate that the cube-sampled K-Prototype algorithm gives the best clustering accuracy among similarly sampled popular clustering algorithms (K-Means, Hierarchical Clustering (HC), and Spectral Clustering (SC)). When compared with unsampled K-Prototype, K-Means, HC, and SC, it still has the best accuracy, with the added advantage of reduced computational complexity (due to the reduced data size).
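As an illustrative sketch only: the pipeline below derives inclusion probabilities from the leading principal component, draws an unequal-probability sample, and clusters the sampled mixed data with K-Prototype (via the kmodes package). The toy dataset, the cluster count, and the use of simple Poisson sampling as a stand-in for the full cube method are all assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from kmodes.kprototypes import KPrototypes

rng = np.random.default_rng(0)

# Toy mixed data: two numeric columns and one categorical column.
num = rng.normal(size=(1000, 2))
cat = rng.integers(0, 3, size=(1000, 1)).astype(str)
X = np.hstack([num.astype(object), cat.astype(object)])

# Inclusion probabilities from the leading principal component of the numeric part,
# scaled so that the expected sample size is n_sample.
n_sample = 200
scores = np.abs(PCA(n_components=1).fit_transform(num)).ravel()
pi = np.clip(n_sample * scores / scores.sum(), 0.0, 1.0)

# Unequal-probability (Poisson) sampling as a simplified stand-in for cube sampling.
sample_idx = np.where(rng.random(len(pi)) < pi)[0]

# Cluster the sampled mixed data with K-Prototype; column 2 is categorical.
kp = KPrototypes(n_clusters=3, init='Huang', random_state=0)
labels = kp.fit_predict(X[sample_idx], categorical=[2])
print(labels[:10])
```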