The paper proposes an identification procedure for autoregressive Gaussian stationary stochastic processes in which the manifest (or observed) variables are mostly related through a limited number of latent (or hidden) variables. The method exploits the sparse plus low-rank decomposition of the inverse of the manifest spectral density and the efficient convex relaxations recently proposed for such decompositions.
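The sparse plus low-rank structure of the manifest precision can be seen in a static Gaussian analogue via a Schur complement: marginalizing out a few latent variables adds a low-rank correction to an otherwise sparse concentration matrix. The following NumPy sketch is our own illustration of that mechanism, not the paper's code; all names and dimensions are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 2                      # n manifest variables, r latent variables

# Sparse conditional structure among the manifest variables:
# a diagonal (hence maximally sparse) block K_xx.
K_xx = np.diag(rng.uniform(2.0, 3.0, n))
K_xz = rng.normal(size=(n, r)) * 0.2     # weak manifest-latent coupling
K_zz = np.eye(r) * 2.0

# Joint precision matrix of (manifest, latent) variables.
K = np.block([[K_xx, K_xz], [K_xz.T, K_zz]])
Sigma = np.linalg.inv(K)                 # joint covariance
Sigma_x = Sigma[:n, :n]                  # manifest (marginal) covariance

# Schur complement identity: the manifest precision splits as
# inv(Sigma_x) = K_xx (sparse) - K_xz inv(K_zz) K_zx (rank <= r).
L = K_xz @ np.linalg.inv(K_zz) @ K_xz.T
assert np.allclose(np.linalg.inv(Sigma_x), K_xx - L)
assert np.linalg.matrix_rank(L) <= r
```

The convex relaxations the paper refers to recover such a splitting from data alone, typically by trading an l1 penalty (promoting sparsity) against a trace penalty (promoting low rank).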
We consider a family of divergence-based minimax approaches to robust filtering. The mismodeling budget, or tolerance, is specified at each time increment of the model. More precisely, all possible model increments belong to a ball formed by placing a bound on the Tau-divergence between the actual and the nominal model increment. The robust filter is then obtained by minimizing the mean square error under the least favorable model in that ball. It turns out that the solution is a family of Kalman-like filters whose gain matrix is updated through a risk-sensitive-like iteration in which the risk-sensitivity parameter is now time-varying. As a consequence, we also extend the risk-sensitive filter to a family of risk-sensitive-like filters indexed by the Tau-divergence family.

Index Terms: Robust Kalman filtering, Tau-divergence family, minimax problem, risk-sensitive filtering.
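A minimal sketch of a risk-sensitive-like gain update, assuming the common form in which the predicted covariance P is inflated to (P^{-1} - theta*I)^{-1} with a time-varying risk parameter theta; this illustrates the general mechanism only, not the paper's exact recursion or its Tau-divergence parametrization.

```python
import numpy as np

def risk_sensitive_kalman_step(A, C, Q, R, x, P, y, theta):
    """One step of a risk-sensitive-like Kalman recursion (illustrative).

    The nominal predicted covariance P is inflated via
    V = (P^{-1} - theta * I)^{-1}, theta >= 0, before the gain is
    computed; theta plays the role of a (possibly time-varying)
    risk-sensitivity parameter, and theta = 0 recovers the standard
    Kalman filter. theta must be small enough that the inverse exists.
    """
    n = P.shape[0]
    # Risk-sensitive inflation of the predicted covariance.
    V = np.linalg.inv(np.linalg.inv(P) - theta * np.eye(n))
    # Measurement update with the inflated covariance.
    S = C @ V @ C.T + R
    K = V @ C.T @ np.linalg.inv(S)
    x_upd = x + K @ (y - C @ x)
    P_upd = V - K @ C @ V
    # Time update.
    x_pred = A @ x_upd
    P_pred = A @ P_upd @ A.T + Q
    return x_pred, P_pred
```

With theta > 0 the filter hedges against model mismatch by keeping the state covariance larger than in the nominal case.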
IEEE Access, Volume 3, 2015, Article number 7217798, Pages 1512-1530 (Open Access)
Cognition-based networks: A new perspective on network optimization using learning and distributed intelligence
Zorzi, M.a, Zanella, A.a, Testolin, A.b, De Filippo De Grazia, M.b, Zorzi, M.bc
a Department of Information Engineering, University of Padua, Padua, Italy
b Department of General Psychology, University of Padua, Padua, Italy
c IRCCS San Camillo Foundation, Venice-Lido, Italy
Abstract: In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS, we propose to combine this learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared with past and current research efforts in this area, the technical approach outlined in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of the expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.
In this paper, we extend the Beta divergence family to multivariate power spectral densities. As in the scalar case, we show that it smoothly connects the multivariate Kullback-Leibler divergence with the multivariate Itakura-Saito distance. We then study a spectrum approximation problem, based on the Beta divergence family, which is related to a multivariate extension of the THREE spectral estimation technique. It is then possible to characterize a family of solutions to the problem, and we provide an upper bound on their complexity. Finally, we show that the most suitable solution in this family depends on the specific features required of the estimation problem.
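In the scalar case the Beta divergence has a closed form that interpolates between the two limits mentioned above; the pointwise NumPy sketch below (our own naming and discretization, not the paper's multivariate construction) shows the interpolation numerically. For spectral densities one would integrate this quantity over frequency.

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Pointwise Beta divergence between positive arrays x and y
    (scalar case, illustrative). beta -> 1 recovers the Kullback-Leibler
    divergence, beta -> 0 the Itakura-Saito distance."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if np.isclose(beta, 1.0):            # Kullback-Leibler limit
        return np.sum(x * np.log(x / y) - x + y)
    if np.isclose(beta, 0.0):            # Itakura-Saito limit
        return np.sum(x / y - np.log(x / y) - 1.0)
    # General member of the family for beta not in {0, 1}.
    return np.sum((x**beta + (beta - 1.0) * y**beta
                   - beta * x * y**(beta - 1.0)) / (beta * (beta - 1.0)))
```

The divergence vanishes exactly when x = y, and varying beta continuously moves the general formula toward either limit.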
Brains at rest generate dynamical activity that is highly structured in space and time. We suggest that spontaneous activity, as in rest or dreaming, underlies top-down dynamics of generative models. During active tasks, generative models provide top-down predictive signals for perception, cognition, and action. When the brain is at rest and stimuli are weak or absent, top-down dynamics optimize the generative models for future interactions by maximizing the entropy of explanations and minimizing model complexity. Spontaneous fluctuations of correlated activity within and across brain regions may reflect transitions between "generic priors" of the generative model: low dimensional latent variables and connectivity patterns of the most common perceptual, motor, cognitive, and interoceptive states. Even at rest, brains are proactive and predictive.
Abstract: Structured covariances occurring in spectral analysis, filtering, and identification need to be estimated from a finite observation record. The corresponding sample covariance usually fails to possess the required structure. This is the case, for instance, in the Byrnes-Georgiou-Lindquist THREE-like tunable, high-resolution spectral estimators, where the output covariance Σ of a linear filter is needed to initialize the spectral estimation technique. The sample covariance estimate of Σ, however, is usually not compatible with the filter. In this paper, we present a new, systematic way to overcome this difficulty. The new estimate of Σ is obtained by solving an ancillary problem with an entropic-type criterion. Extensive scalar and multivariate simulations show that this new approach consistently leads to a significant improvement in the spectral estimators' performance.
Modeling and identification of high-dimensional stochastic processes is ubiquitous in many fields. In particular, there is growing interest in modeling stochastic processes with simple and interpretable structures. In many applications, such as econometrics and the biomedical sciences, it is natural to describe each component of the stochastic process in terms of a few factor variables, which are not accessible for observation, and possibly of a few other components of the process. These relations can be encoded graphically via a structured dynamic network, referred to hereafter as a "sparse plus low-rank (S+L) network". The problem of finding the S+L network, as well as the dynamic model, can be posed as a system identification problem. In this paper, we introduce two new nonparametric methods to identify dynamic models for stochastic processes described by an S+L network. These methods take inspiration from regularized estimators based on recently introduced kernels (e.g., "stable spline", "tuned-correlated", etc.). Numerical examples show the benefit of introducing the S+L structure in the identification procedure.
This paper presents a semi-parametric algorithm for online learning of a robot inverse dynamics model. It combines the strengths of parametric and non-parametric modeling: the former exploits the rigid body dynamics equation, while the latter exploits a suitable kernel function. We provide an extensive comparison with other methods from the literature using real data from the iCub humanoid robot. In doing so, we also compare two different techniques for estimating the hyperparameters of the kernel function, namely cross validation and marginal likelihood optimization.
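The semi-parametric idea (a parametric model capturing the known physical structure, with a kernel machine absorbing the residual) can be sketched in batch form as follows. This is a generic illustration using an RBF kernel and kernel ridge regression, with a hypothetical feature map `phi` standing in for the rigid-body-dynamics regressor; the paper's online algorithm and its actual regressors are not reproduced here.

```python
import numpy as np

def rbf(A, B, length):
    """Squared-exponential (RBF) kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length ** 2))

def fit_semiparametric(X, y, phi, lam=1e-4, length=0.5):
    """Semi-parametric fit (illustrative sketch, not the paper's code):
    a least-squares parametric component on features phi(X), plus kernel
    ridge regression on the residual."""
    Phi = phi(X)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # parametric part
    resid = y - Phi @ w                                # what physics missed
    K = rbf(X, X, length)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), resid)

    def predict(Xs):
        return phi(Xs) @ w + rbf(Xs, X, length) @ alpha

    return predict
```

Here `lam` and `length` are the kernel hyperparameters; the two tuning strategies the paper compares would select them either by cross validation on held-out data or by maximizing the marginal likelihood of the residuals.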