Deep-learning (DL) methods like DeepMind's AlphaFold2 (AF2) have led to substantial improvements in protein structure prediction. We analyse confident AF2 models from 21 model organisms using a new classification protocol (CATH-Assign), which exploits novel DL methods for structural comparison and classification. Of ~370,000 confident models, 92% can be assigned to 3,253 superfamilies in our CATH domain superfamily classification. The remaining models cluster into 2,367 putative novel superfamilies. Detailed manual analysis of 618 of these, each having at least one human relative, reveals extremely remote homologies and further unusual features; only 25 novel superfamilies could be confirmed. Although most models map to existing superfamilies, AF2 domains expand CATH by 67%, increase the number of unique 'global' folds by 36%, and will provide valuable insights into structure-function relationships. CATH-Assign will harness the huge expansion in structural data provided by DeepMind to rationalise the evolutionary changes driving functional divergence.
Over the last year, there have been substantial improvements in protein structure prediction, particularly from methods like DeepMind's AlphaFold2 (AF2) that exploit deep-learning strategies. Here we report a new CATH-Assign protocol, which we use to analyse the first tranche of AF2 models predicted for 21 model organisms, and discuss the insights these models bring to the nature of protein structure space. We analyse good-quality models without unusual structural characteristics, i.e., features rarely seen in experimental structures. Of the ~370,000 models that meet these criteria, 92% can be assigned to evolutionary superfamilies in CATH. The remaining domains cluster into 2,367 putative novel superfamilies. Detailed manual analysis of a subset of 618 of these that had at least one human relative revealed some extremely remote homologies and some further unusual features, but 26 could be confirmed as novel superfamilies, one of which has an alpha-beta propeller architectural arrangement never seen before. By clustering both experimental and predicted AF2 domain structures into distinct 'global fold' groups, we observe that the new AF2 models in CATH increase information on structural diversity by 36%. This expansion in structural diversity will help to reveal associated functional diversity not previously detected. Our novel CATH-Assign protocol scales well and will be able to harness the huge expansion (at least 100 million models) in structural data promised by DeepMind to provide more comprehensive coverage of even the most diverse superfamilies and help rationalise evolutionary changes in their functions.
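The abstract's grouping of domain structures into distinct 'global fold' groups can be illustrated with a minimal sketch. The function below is an assumption for illustration, not the actual CATH-Assign algorithm: it greedily assigns each domain to the first existing cluster whose representative it matches above a similarity threshold, given a precomputed pairwise structural-similarity matrix (the threshold value and matrix are hypothetical).

```python
import numpy as np

def greedy_fold_clusters(sim, threshold=0.6):
    """Greedily group domains into 'fold groups': each domain joins the
    first existing cluster whose representative it matches above the
    similarity threshold, otherwise it seeds a new cluster."""
    reps, clusters = [], []
    for i in range(sim.shape[0]):
        for c, rep in enumerate(reps):
            if sim[i, rep] >= threshold:
                clusters[c].append(i)
                break
        else:
            reps.append(i)
            clusters.append([i])
    return clusters

# Toy pairwise structural-similarity matrix for 4 domains:
# domains 0 and 1 are similar; domains 2 and 3 are similar.
sim = np.array([
    [1.0, 0.8, 0.1, 0.2],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.9],
    [0.2, 0.1, 0.9, 1.0],
])
print(greedy_fold_clusters(sim))  # [[0, 1], [2, 3]]
```

A single-pass greedy scheme like this scales roughly linearly in the number of domains times the number of clusters, which is one reason representative-based clustering is commonly used on datasets of this size.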
CATH is a protein domain classification resource that combines an automated workflow of structure and sequence comparison with expert manual curation to construct a hierarchical classification of evolutionary and structural relationships. The aim of this study was to develop algorithms for detecting remote homologues that might be missed by state-of-the-art HMM-based approaches. The proposed algorithm (CATHe) combines a neural network with sequence representations obtained from protein language models. The dataset employed consisted of remote homologues that had less than 20% sequence identity. The CATHe models trained on the 1,773 largest and the 50 largest CATH superfamilies had accuracies of 85.6 ± 0.4% and 98.15 ± 0.30%, respectively. To examine whether CATHe detects more remote homologues than HMM-based approaches, we employed a dataset consisting of protein regions that had annotations in Pfam but not in CATH. For this experiment, we used highly reliable CATHe predictions (expected error rate <0.5%), which provided CATH annotations for 4.62 million Pfam domains. For a subset of these domains from Homo sapiens, we structurally validated 90.86% of the predictions by comparing their corresponding AlphaFold structures with experimental structures from the CATHe-predicted superfamilies.
Motivation CATH is a protein domain classification resource that exploits an automated workflow of structure and sequence comparison alongside expert manual curation to construct a hierarchical classification of evolutionary and structural relationships. The aim of this study was to develop algorithms for detecting remote homologues missed by state-of-the-art HMM-based approaches. The method developed (CATHe) combines a neural network with sequence representations obtained from protein language models. It was assessed using a dataset of remote homologues having less than 20% sequence identity to any domain in the training set. Results The CATHe models trained on the 1773 largest and the 50 largest CATH superfamilies had accuracies of 85.6 ± 0.4% and 98.2 ± 0.3%, respectively. As a further test of the power of CATHe to detect remote homologues missed by HMMs derived from CATH domains, we used a dataset consisting of protein domains that had annotations in Pfam, but not in CATH. By using highly reliable CATHe predictions (expected error rate <0.5%), we were able to provide CATH annotations for 4.62 million Pfam domains. For a subset of these domains from Homo sapiens, we structurally validated 90.86% of the predictions by comparing their corresponding AlphaFold 2 structures with structures from the CATH superfamilies to which they were assigned. Availability and Implementation The code for the developed models can be found at https://github.com/vam-sin/CATHe. Supplementary information Supplementary data are available at Bioinformatics online.
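The CATHe design described above, a neural network on top of protein language-model embeddings, can be sketched minimally. The sketch below is an assumption for illustration only (random weights, hypothetical sizes: a 1024-dimensional embedding, 128 hidden units, 50 superfamily classes); the real CATHe architecture and trained weights are available at the repository linked above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_predict(embedding, W1, b1, W2, b2):
    """One hidden layer + softmax over superfamily labels: a minimal
    stand-in for a classifier head on top of a protein language-model
    embedding of a domain sequence."""
    h = np.maximum(0.0, embedding @ W1 + b1)   # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()

# Hypothetical sizes: 1024-d embedding, 128 hidden units, 50 superfamilies.
emb = rng.standard_normal(1024)
W1, b1 = rng.standard_normal((1024, 128)) * 0.01, np.zeros(128)
W2, b2 = rng.standard_normal((128, 50)) * 0.01, np.zeros(50)

probs = mlp_predict(emb, W1, b1, W2, b2)
print(probs.argmax())  # index of the most probable superfamily
```

In practice the softmax probability of the top class can also serve as a confidence score, which is how the "highly reliable predictions (expected error rate <0.5%)" subset in the abstract would typically be selected.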
We propose an efficient method for simulating a cryo-Electron Tomography (cryo-ET) image of a target macromolecule with several neighbouring macromolecules packed around it to achieve realistic crowded cytoplasmic content. The simulated results are subtomograms with corresponding noise-free 3D density maps and pre-specified labels (PDB ID, centre locations, and orientations) to assist bioimage analysis. They can serve as benchmark datasets for testing cryo-ET analysis algorithms under development and as training datasets with readily available ground-truth labels for learning neural network models. The COVID-19 pandemic has sparked a global health crisis that has severely impacted lives worldwide. As an important application, we simulated the scene of SARS-CoV-2 interacting with the host cell. The simulated cryo-ET images clearly showed the binding domain between the virus and the host cell, facilitating research on SARS-CoV-2 infection. We also trained two different classification models to demonstrate that our simulated cryo-ET data can assist cryo-ET analysis tasks and support performance comparisons between different methods.
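The core idea, a subtomogram paired with its noise-free density map and ground-truth label, can be sketched in a few lines. This toy stand-in is an assumption for illustration, not the paper's pipeline: it places a spherical "macromolecule" at a known centre and adds Gaussian noise scaled to a target signal-to-noise ratio (the size, radius, SNR, and placeholder label fields are all hypothetical).

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_subtomogram(size=32, center=(16, 16, 16), radius=5.0, snr=0.5):
    """Toy subtomogram simulator: a spherical density at a known centre
    plus Gaussian noise at a target SNR. Returns (noisy, clean, label)."""
    zz, yy, xx = np.indices((size, size, size))
    dist = np.sqrt((xx - center[0]) ** 2
                   + (yy - center[1]) ** 2
                   + (zz - center[2]) ** 2)
    clean = (dist <= radius).astype(float)        # noise-free density map
    noise_std = np.sqrt(clean.var() / snr)        # noise level from SNR
    noisy = clean + rng.normal(0.0, noise_std, clean.shape)
    label = {"pdb_id": "XXXX",                    # placeholder identifier
             "center": center}                    # ground-truth location
    return noisy, clean, label

noisy, clean, label = simulate_subtomogram()
print(noisy.shape, label["center"])  # (32, 32, 32) (16, 16, 16)
```

Pairing each noisy volume with its clean map and label is what makes such simulations directly usable as supervised training data, since experimental tomograms rarely come with reliable ground truth.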
Cysteine (Cys) is the most reactive amino acid, participating in a wide range of biological functions. In-silico predictions complement experiments to meet the need for functional characterization. Algorithms that predict multiple Cys functions are scarce, in contrast to single-function prediction algorithms. Here we present a deep neural network-based multiple-Cys-function predictor, available as a web server (DeepCys, https://deepcys.herokuapp.com/). The DeepCys model was trained and tested on two independent datasets curated from protein crystal structures. The method requires three inputs, namely the PDB identifier (ID), chain ID, and residue ID for a given Cys, and outputs the probabilities of four cysteine functions (disulphide, metal-binding, thioether, and sulphenylation), predicting the most probable one. The algorithm exploits local and global protein properties, such as sequence and secondary-structure motifs, buried fractions, microenvironments, and protein/enzyme class. DeepCys outperformed most multiple- and single-function Cys algorithms, predicts the largest number of cysteine functions among comparable methods, and is, for the first time, able to explicitly predict the thioether function. The tool was used to elucidate cysteine functions on domains of unknown function belonging to cytochrome c oxidase subunit II-like transmembrane domains.
Classifying proteins into their respective enzyme classes is an interesting question for researchers for a variety of reasons. The open-source Protein Data Bank (PDB) contains more than 160,000 structures, with more being added every day. This paper proposes an attention-based bidirectional LSTM model (ABLE), trained on oversampled data generated by SMOTE, to classify a protein into one of the six enzyme classes or a negative class using only the primary structure of the protein, described as a string by its FASTA sequence, as input. We achieve the highest F1-score of 0.834 using our proposed model on a dataset of proteins from the PDB. We benchmark our model against seventeen other machine learning and deep learning models, including CNN, LSTM, BiLSTM, and GRU, and perform extensive experimentation and statistical testing to corroborate our results.
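SMOTE, the oversampling step mentioned above, synthesizes new minority-class samples by interpolating between a minority point and one of its nearest minority-class neighbours. The following minimal sketch (my own illustration, not the paper's implementation; the feature dimensions and neighbour count are arbitrary) shows the core interpolation:

```python
import numpy as np

rng = np.random.default_rng(0)

def smote(minority, n_new, k=3):
    """Minimal SMOTE: each synthetic sample interpolates between a random
    minority-class point and one of its k nearest minority neighbours."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]    # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                     # random interpolation factor
        out.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(out)

minority = rng.standard_normal((10, 4))        # 10 samples, 4 features
synthetic = smote(minority, n_new=5)
print(synthetic.shape)  # (5, 4)
```

Because each synthetic point lies on a segment between two real minority samples, SMOTE expands the minority region without simply duplicating examples, which is why it helps on class-imbalanced tasks like the seven-way classification described above.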