2022
DOI: 10.1186/s12859-022-04628-8
A-Prot: protein structure modeling using MSA transformer

Abstract: Background The accuracy of protein 3D structure prediction has been dramatically improved with the help of advances in deep learning. In the recent CASP14, DeepMind demonstrated that their new version of AlphaFold (AF) produces highly accurate 3D models close to experimental structures. The success of AF shows that the multiple sequence alignment of a sequence contains rich evolutionary information, leading to accurate 3D models. Despite the success of AF, only the prediction code is ope…

Cited by 10 publications (4 citation statements)
References 42 publications
“…Concurrent with advances in structure prediction, self-supervised learning on massive sets of unlabeled protein sequences has shown remarkable utility across protein modeling tasks (17, 18). Embeddings from transformer encoder models trained for masked language modeling have been used for variant prediction (19), evolutionary analysis (20, 21), and as features for protein structure prediction (22, 23). Auto-regressive transformer models have been used to generate functional proteins entirely from sequence learning (24).…”
Section: Introduction
confidence: 99%
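
As a concrete illustration of the "embeddings as features" idea in the statement above, here is a minimal sketch of extracting per-residue representations from a transformer encoder trained with masked language modeling. It assumes the fair-esm package; the specific checkpoint and the input sequence are illustrative choices, not anything prescribed by the cited works.

```python
# Minimal sketch: per-residue embeddings from a transformer encoder trained
# with masked language modeling (ESM-1b via fair-esm), usable as features
# for downstream tasks such as variant effect or structure prediction.
import torch
import esm

model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
model.eval()
batch_converter = alphabet.get_batch_converter()

# Illustrative sequence; any protein sequence works.
_, _, tokens = batch_converter([("query", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")])

with torch.no_grad():
    out = model(tokens, repr_layers=[33])  # final-layer representations

# Strip the BOS/EOS positions so rows align with residues: shape (L, 1280).
embeddings = out["representations"][33][0, 1:-1]
```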
“…A-Prot (Hong et al. 2022) performs residue contact prediction using MSA-Transformer and inputs the predicted pairs of contacting residues into PyRosetta for protein structure prediction. The analysis results show that the quality of structures predicted by A-Prot exceeds that of the current best structure prediction methods, but it is still far from solving the problem of protein structure prediction.…”
Section: Language Models For Proteins
confidence: 99%
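
The statement above describes the A-Prot pipeline only at a high level: contacts from MSA-Transformer, fed to PyRosetta. Below is a minimal sketch of that pipeline under stated assumptions, not the authors' code: it assumes the fair-esm package for the MSA Transformer and PyRosetta for constraint-based modeling, and the probability cutoff, distance target, and flat-harmonic parameters are all illustrative values.

```python
# Sketch of the pipeline described above: predict contacts with the MSA
# Transformer, then add them to a PyRosetta pose as distance constraints.
# Hypothetical glue code, not the A-Prot implementation.
import torch
import esm
from pyrosetta.rosetta.core.id import AtomID
from pyrosetta.rosetta.core.scoring.constraints import AtomPairConstraint
from pyrosetta.rosetta.core.scoring.func import FlatHarmonicFunc


def predict_contacts(msa):
    """msa: list of (label, aligned_sequence) tuples, query sequence first."""
    model, alphabet = esm.pretrained.esm_msa1b_t12_100M_UR50S()
    model.eval()
    batch_converter = alphabet.get_batch_converter()
    _, _, tokens = batch_converter([msa])  # a batch containing one MSA
    with torch.no_grad():
        return model.predict_contacts(tokens)[0]  # (L, L) contact probabilities


def add_contact_constraints(pose, contacts, prob_cutoff=0.9, d0=8.0):
    """Turn high-probability long-range contacts into CB-CB constraints.

    prob_cutoff, d0, and the FlatHarmonicFunc parameters are illustrative,
    not values from the A-Prot paper. pyrosetta.init() must have been
    called before the pose was built.
    """
    def cb_or_ca(i):
        res = pose.residue(i)
        name = "CB" if res.has("CB") else "CA"  # glycine has no CB
        return AtomID(res.atom_index(name), i)

    n = pose.total_residue()
    for i in range(1, n + 1):
        for j in range(i + 6, n + 1):  # long-range pairs only
            if float(contacts[i - 1, j - 1]) < prob_cutoff:
                continue
            # Flat-bottomed well: no penalty within d0 +/- tolerance.
            func = FlatHarmonicFunc(d0, 1.0, 2.0)  # x0, sd, tolerance
            pose.add_constraint(AtomPairConstraint(cb_or_ca(i), cb_or_ca(j), func))
```

Gating on a contact-probability cutoff and restraining CB-CB distances with a flat-bottomed potential is a common way to inject predicted contacts into Rosetta-style folding; A-Prot's actual restraint form and thresholds may differ.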