In this paper, we propose an online clustering method called Contrastive Clustering (CC), which explicitly performs instance- and cluster-level contrastive learning. To be specific, for a given dataset, positive and negative instance pairs are constructed through data augmentations and then projected into a feature space. Therein, instance- and cluster-level contrastive learning are conducted in the row and column space, respectively, by maximizing the similarities of positive pairs while minimizing those of negative ones. Our key observation is that the rows of the feature matrix can be regarded as soft labels of instances, and accordingly the columns can be regarded as cluster representations. By simultaneously optimizing the instance- and cluster-level contrastive losses, the model jointly learns representations and cluster assignments in an end-to-end manner. Moreover, the proposed method can compute the cluster assignment for each instance on the fly, even when the data arrives in a stream. Extensive experimental results show that CC remarkably outperforms 17 competitive clustering methods on six challenging image benchmarks. In particular, CC achieves an NMI of 0.705 (0.431) on the CIFAR-10 (CIFAR-100) dataset, an up to 19% (39%) improvement over the best baseline. The code is available at https://github.com/XLearning-SCU/2021-AAAI-CC.
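As a rough illustration of the dual row/column view, the sketch below (Python with NumPy; the network, augmentations, and softmax cluster head of the actual method are omitted, and the feature matrices are random placeholders) applies the same NT-Xent-style contrastive loss once over rows (instances) and once over columns (clusters):

```python
import numpy as np

def contrastive_loss(z_a, z_b, temp=0.5):
    """NT-Xent-style loss between two sets of paired vectors.
    Row i of z_a forms a positive pair with row i of z_b;
    all other rows act as negatives."""
    z = np.concatenate([z_a, z_b], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temp
    n = len(z_a)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # the positive of row i is row i+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

rng = np.random.default_rng(0)
h_a = rng.normal(size=(8, 16))                   # features of view A (batch x dim)
h_b = h_a + 0.05 * rng.normal(size=(8, 16))      # view B, a perturbed "augmentation"
inst_loss = contrastive_loss(h_a, h_b)           # rows  = instance representations
clust_loss = contrastive_loss(h_a.T, h_b.T)      # columns = cluster representations
total = inst_loss + clust_loss                   # jointly optimized in CC
```

Transposing the feature matrix is what turns the same loss into a cluster-level objective: each column is contrasted against all other columns across the two views.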
We develop a new estimator of the inverse covariance matrix for high-dimensional multivariate normal data using the horseshoe prior. The proposed graphical horseshoe estimator has attractive properties compared to other popular estimators, such as the graphical lasso and the graphical smoothly clipped absolute deviation. The most prominent benefit is that when the true inverse covariance matrix is sparse, the graphical horseshoe provides estimates with small information divergence from the sampling model. The posterior mean under the graphical horseshoe prior can also be almost unbiased under certain conditions. In addition to these theoretical results, we provide a full Gibbs sampler for implementing our estimator. MATLAB code is available for download from GitHub at http://github.com/liyf1988/GHS. The graphical horseshoe estimator compares favorably to existing techniques in simulations and in a human gene network data analysis.
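To illustrate the shrinkage half of such a Gibbs sampler, the sketch below (a Python illustration under stated assumptions, not the authors' MATLAB code) updates element-wise local scales λ²_ij and a global scale τ² using the standard inverse-gamma auxiliary-variable scheme for half-Cauchy priors. The full graphical horseshoe sampler also updates the precision matrix Ω column-wise given the data; here the off-diagonal entries ω_ij are held fixed as placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def inv_gamma(shape, scale):
    """Draw from InvGamma(shape, scale) as scale / Gamma(shape, 1)."""
    return scale / rng.gamma(shape, size=np.shape(scale))

p = 5
m = p * (p - 1) // 2                       # number of upper-triangular entries
omega = rng.normal(scale=0.1, size=m)      # placeholder off-diagonal entries of Omega
lam2 = np.ones(m)                          # local shrinkage scales lambda_ij^2
nu = np.ones(m)                            # auxiliaries for the half-Cauchy on lambda_ij
tau2, xi = 1.0, 1.0                        # global scale tau^2 and its auxiliary xi

for _ in range(100):
    # conditional updates from the inverse-gamma augmentation of half-Cauchy priors
    lam2 = inv_gamma(1.0, 1.0 / nu + omega**2 / (2.0 * tau2))
    nu = inv_gamma(1.0, 1.0 + 1.0 / lam2)
    tau2 = inv_gamma((m + 1) / 2.0, 1.0 / xi + np.sum(omega**2 / (2.0 * lam2)))
    xi = inv_gamma(1.0, 1.0 + 1.0 / tau2)
```

The augmentation keeps every conditional a standard inverse-gamma draw, which is what makes a full Gibbs sweep tractable even though the half-Cauchy itself is not conjugate.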
Bergenin, isolated from the herb Saxifraga stolonifera Curt. (Hu-Er-Cao), has anti-inflammatory, antitussive, and wound-healing activities. The aim of the present study was to identify the effect of bergenin on experimental colitis and to explore the underlying mechanisms. Our results showed that oral administration of bergenin remarkably alleviated the disease symptoms of mice with dextran sulfate sodium (DSS)-induced colitis, as evidenced by reduced DAI scores, colon shortening, MPO activity, and pathologic abnormalities in the colon. Bergenin markedly inhibited the mRNA and protein expression of IL-6 and TNF-α in colon tissues, but not that of the mucosal barrier-associated proteins occludin, E-cadherin, and MUC-2. In vitro, bergenin significantly inhibited the expression of IL-6 and TNF-α as well as the nuclear translocation and DNA-binding activity of NF-κB-p65 in lipopolysaccharide (LPS)-stimulated peritoneal macrophages and RAW264.7 cells, effects that were largely reversed by the PPARγ antagonist GW9662 and by siPPARγ. Subsequently, bergenin was identified as a PPARγ agonist: it entered macrophages, bound PPARγ, promoted the nuclear translocation and transcriptional activity of PPARγ, and increased the mRNA expression of CD36, LPL, and aP2. In addition, bergenin significantly up-regulated the expression of SIRT1, inhibited the acetylation of NF-κB-p65, and increased the association between NF-κB-p65 and IκBα. Finally, the correlation between PPARγ activation and the attenuation of colitis, the inhibition of IL-6 and TNF-α expression, NF-κB-p65 acetylation and nuclear translocation, and the up-regulation of SIRT1 expression by bergenin was validated in mice with DSS-induced colitis and/or LPS-stimulated macrophages. In summary, bergenin ameliorates colitis in mice by inhibiting macrophage activation via the PPARγ/SIRT1/NF-κB-p65 pathway.
These findings provide evidence for the further development of bergenin as an anti-UC drug, and offer a paradigm for recognizing the anti-UC mechanisms of structurally similar compounds found in traditional Chinese medicines.
Robust multi-view learning with incomplete information has received significant attention due to issues such as incomplete correspondences and incomplete instances that commonly affect real-world multi-view applications. Existing approaches heavily rely on paired samples to realign or impute defective ones, but such preconditions cannot always be satisfied in practice due to the complexity of data collection and transmission. To address this problem, we present a novel framework called SeMantic Invariance LEarning (SMILE) for multi-view clustering with incomplete information that does not require any paired samples. To be specific, we discover the existence of invariant semantic distribution across different views, which enables SMILE to alleviate the cross-view discrepancy to learn consensus semantics without requiring any paired samples. The resulting consensus semantics remains unaffected by cross-view distribution shifts, making them useful for realigning/imputing defective instances and forming clusters. We demonstrate the effectiveness of SMILE through extensive comparison experiments with 13 state-of-the-art baselines on five benchmarks. Our approach improves the clustering accuracy of NoisyMNIST from 19.3%/23.2% to 82.7%/69.0% when the correspondences/instances are fully incomplete. We will release the code after acceptance.
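One hedged way to picture alignment without paired samples is to match per-view cluster-assignment distributions rather than individual samples. The toy sketch below is not the actual SMILE objective; `semantic_alignment_loss` and the random logits are illustrative. It compares the marginal assignment distributions of two views with a KL divergence, which requires no cross-view correspondences:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_alignment_loss(logits_a, logits_b, eps=1e-12):
    """KL divergence between the *marginal* cluster-assignment
    distributions of two views. Because it compares batch-level
    distributions rather than individual samples, it needs no
    cross-view correspondences."""
    p = softmax(logits_a).mean(axis=0)   # view-A cluster marginal
    q = softmax(logits_b).mean(axis=0)   # view-B cluster marginal
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(0)
la = rng.normal(size=(32, 10))           # unpaired batch from view A
lb = rng.normal(size=(32, 10))           # unpaired batch from view B
loss_ab = semantic_alignment_loss(la, lb)
```

Driving such a distribution-level discrepancy to zero is one way a shared semantic distribution can be enforced across views even when no sample-level pairing exists.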
Codeword stabilized (CWS) codes are, in general, non-additive quantum codes that correct errors by an exhaustive search over different error patterns, similar to the way classical non-linear codes are decoded. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less, and employs a separate n-qubit measurement in each test. In this paper, we suggest an error grouping technique that allows large groups of errors to be tested simultaneously in a single measurement. This structured error recovery technique reduces the number of measurements by a factor of about 3^t. While this still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery in the special case of additive CWS codes.
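A back-of-envelope count makes the ~3^t factor concrete, assuming each group collects the 3^k Pauli errors (X, Y, or Z per affected qubit) that share a given support of k qubits:

```python
from math import comb

def num_pauli_errors(n, t):
    """Number of distinct Pauli errors of weight <= t on n qubits:
    choose the support, then one of 3 nontrivial Paulis per qubit."""
    return sum(comb(n, k) * 3**k for k in range(t + 1))

def num_supports(n, t):
    """Number of possible error supports (subsets of <= t qubits)."""
    return sum(comb(n, k) for k in range(t + 1))

# Brute force tests every error; grouping by support tests one
# measurement per support, a reduction approaching 3^t for n >> t.
ratio = num_pauli_errors(10, 2) / num_supports(10, 2)
```

For n = 10 and t = 2 this gives 436 errors but only 56 supports, a reduction of about 7.8, approaching 3^2 = 9 as n grows.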
Since the advent of the horseshoe prior for regularization, global-local shrinkage methods have proved to be a fertile ground for the development of Bayesian methodology in machine learning, specifically for high-dimensional regression and classification problems. They have achieved remarkable computational success and enjoy strong theoretical support. Most of the existing literature has focused on the linear Gaussian case; see Bhadra et al. (2019b) for a systematic survey. The purpose of the current article is to demonstrate that horseshoe regularization is useful far more broadly, by reviewing both methodological and computational developments in complex models that are more relevant to machine learning applications. Specifically, we focus on methodological challenges of horseshoe regularization in nonlinear and non-Gaussian models, multivariate models, and deep neural networks. We also outline recent computational developments in horseshoe shrinkage for complex models, along with a list of available software implementations that allow one to venture beyond the comfort zone of canonical linear regression problems.
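For reference, the canonical horseshoe hierarchy in the linear Gaussian case places a half-Cauchy prior on each local scale and on the shared global scale:

```latex
\beta_i \mid \lambda_i, \tau \sim \mathcal{N}\!\left(0, \lambda_i^2 \tau^2\right),
\qquad \lambda_i \sim \mathrm{C}^{+}(0, 1),
\qquad \tau \sim \mathrm{C}^{+}(0, 1).
```

The heavy-tailed local scales \(\lambda_i\) let individual signals escape shrinkage while the global scale \(\tau\) pulls the bulk of the coefficients toward zero; the extensions reviewed in the article carry this global-local structure into non-Gaussian likelihoods, multivariate models, and neural network weights.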