Biology has become a data-intensive science. Recent technological advances in single-cell genomics have enabled the measurement of multiple facets of cellular state, producing datasets with millions of single-cell observations. While these data hold great promise for understanding molecular mechanisms in health and disease, analysis challenges arising from sparsity, technical and biological variability, and the high dimensionality of the data hinder the derivation of such mechanistic insights. To promote the innovation of algorithms for the analysis of multimodal single-cell data, we organized a competition at NeurIPS 2021 applying the Common Task Framework to multimodal single-cell data integration. For this competition, we generated the first multimodal benchmarking dataset for single-cell biology and defined three tasks in this domain: predicting missing modalities, aligning modalities, and learning a joint representation across modalities. We further specified evaluation metrics and developed a cloud-based algorithm evaluation pipeline. Using this setup, 280 competitors submitted over 2,600 proposed solutions within a three-month period, showcasing substantial innovation, especially in the modality alignment task. Here, we present the results, describe trends among well-performing approaches, and discuss challenges associated with running the competition.
Recent progress in single-cell genomics has produced different library protocols and techniques for profiling one or more data modalities in individual cells. Machine learning methods have so far addressed specific integration challenges (libraries, samples, paired and unpaired data modalities) separately. We formulate a unifying, data-driven methodology that addresses all of these challenges. To this end, we design a hybrid architecture that combines an autoencoder (AE) network with adversarial learning by a cycleGAN (cGAN) network, jointly referred to as scAEGAN. The AE learns a low-dimensional embedding of each condition, whereas the cGAN learns a non-linear mapping between the AE representations. The core insight is that the AE respects each sample's uniqueness, whereas the cGAN exploits the distributional similarity of the data in the latent space. We evaluate scAEGAN using simulated data and real datasets spanning a single modality (scRNA-seq), different library preparations (Fluidigm C1, CelSeq, CelSeq2, SmartSeq), and several data modalities, such as paired scRNA-seq and scATAC-seq. We find that scAEGAN outperforms Seurat 3 in library integration, is more robust against data sparsity, and outperforms Seurat 4 in integrating paired data from the same cell. Furthermore, in predicting one data modality from another, scAEGAN outperforms Babel. We conclude that scAEGAN surpasses current state-of-the-art methods across several seemingly different integration challenges.
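The scAEGAN design described above — per-condition autoencoders plus a cycleGAN mapping between their latent spaces — can be sketched in minimal form. This is a hedged illustration of the data flow only, assuming linear, untrained weights and toy dimensions; all names (`E_a`, `G_ab`, `cycle_loss`, etc.) are hypothetical and the real method trains non-linear networks with adversarial and reconstruction losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_layer(d_in, d_out):
    # Hypothetical untrained weights; the actual model would learn these.
    return rng.normal(scale=0.1, size=(d_in, d_out))

# Illustrative sizes: genes per condition and latent dimensionality.
n_genes_a, n_genes_b, d_latent = 200, 200, 16

# One autoencoder per condition/library: encoder E and decoder D.
E_a, D_a = linear_layer(n_genes_a, d_latent), linear_layer(d_latent, n_genes_a)
E_b, D_b = linear_layer(n_genes_b, d_latent), linear_layer(d_latent, n_genes_b)

# cycleGAN-style generators mapping between the two AE latent spaces.
G_ab = linear_layer(d_latent, d_latent)  # latent A -> latent B
G_ba = linear_layer(d_latent, d_latent)  # latent B -> latent A

def cycle_loss(x_a):
    """Send cells from condition A into B's latent space and back,
    and measure how well the round trip recovers the original code."""
    z_a = x_a @ E_a      # AE embedding of condition A
    z_ab = z_a @ G_ab    # translated into B's latent space
    z_aba = z_ab @ G_ba  # cycled back to A's latent space
    return float(np.mean((z_a - z_aba) ** 2))

cells_a = rng.poisson(1.0, size=(32, n_genes_a)).astype(float)  # toy counts
loss = cycle_loss(cells_a)
```

The point of the sketch is the division of labor: the autoencoders operate within a condition, while the cycle-consistency term constrains the cross-condition mappings in the shared low-dimensional space rather than in the raw, sparse count space.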
Single-cell multi-omics technologies enable profiling of several data modalities from the same cell. We designed LIBRA, a neural-network-based framework that learns translations between paired multi-omics profiles via a shared latent space. We demonstrate that LIBRA achieves state-of-the-art performance in multi-omics clustering. In addition, LIBRA is more robust than existing tools as cell numbers decrease. Once trained on paired datasets, LIBRA predicts multi-omic profiles using only a single data modality from the same biological system.
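The translation idea behind LIBRA — encode one modality into a shared latent space, then decode the paired modality from that code — can be illustrated with a minimal sketch. This is an assumption-laden toy, not LIBRA's implementation: the weights are random and untrained, the sizes are invented, and `translate` is a hypothetical name standing in for a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(1)

n_genes, n_peaks, d_latent = 300, 500, 32  # illustrative sizes

# Hypothetical untrained weights; LIBRA learns such mappings from paired data.
W_enc = rng.normal(scale=0.05, size=(n_genes, d_latent))  # RNA -> shared latent
W_dec = rng.normal(scale=0.05, size=(d_latent, n_peaks))  # shared latent -> ATAC

def translate(rna_counts):
    """Embed scRNA-seq profiles in the shared latent space and decode a
    predicted scATAC-seq profile for the same cells."""
    z = np.tanh(rna_counts @ W_enc)  # shared embedding
    return z, z @ W_dec              # latent code, predicted peak signal

rna = rng.poisson(1.0, size=(64, n_genes)).astype(float)  # toy count matrix
z, atac_pred = translate(rna)
```

At inference time, only the single input modality is required: the shared latent code serves both for clustering and for predicting the missing modality.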