2023
DOI: 10.1093/bioinformatics/btad778

TEFDTA: a transformer encoder and fingerprint representation combined prediction method for bonded and non-bonded drug–target affinities

Zongquan Li,
Pengxuan Ren,
Hao Yang
et al.

Abstract: Motivation: The prediction of binding affinity between drug and target is crucial in drug discovery, yet the accuracy of current methods still needs improvement. Moreover, most deep learning methods predict only non-covalent (non-bonded) binding systems and neglect covalent binding, which has gained increasing attention in drug development. Results: In thi…

Cited by 8 publications (4 citation statements)
References 24 publications
“…We compared our model with the following benchmark models: DeepDTA, DeepGS, WideDTA, GraphDTA, FusionDTA, AttentionDTA, DeepCDA, 23 FingerDTA, 24 and TEFDTA. 25 …”
Section: Results
confidence: 99%
“…We compared our model with the following benchmark models: DeepDTA, DeepGS, WideDTA, GraphDTA, FusionDTA, AttentionDTA, DeepCDA, 23 FingerDTA, 24 and TEFDTA. 25 In Table 4, we listed the performance of all the aforementioned models evaluated on the Davis and KIBA data sets. As shown, ImageDTA outperforms most of the models.…”
Section: Comparison of the Prediction Efficiency
confidence: 99%
“…Another work by Kang et al 112 adapts additional BERT-like pretrainings on the transformer encoders separately for either SMILES strings or protein sequences. TEFDTA 113 extends the analysis to bonded (valence) interactions. The authors represented molecules as MACCS fingerprints.…”
Section: Applications of Transformers in Cheminformatics
confidence: 99%
“…MRBDTA [24] introduces the Trans block, which improves the transformer’s encoder and adds encoder-level skip connections to enhance the extraction of molecular features and the ability to identify interaction sites between proteins and drugs. TEFDTA [25] introduces an attention-based transformer encoder; it converts drug SMILES strings to MACCS fingerprints to capture substructure information, enabling the prediction of binding affinity values for drug–target interactions.…”
Section: Introduction
confidence: 99%
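The citation statements above describe TEFDTA's core idea: represent a drug as a 167-bit MACCS fingerprint and encode it with an attention-based transformer encoder. The following is a minimal, self-contained sketch of that pipeline shape, not the authors' code: the toy fingerprint, the sinusoidal bit embedding, and the identity Q/K/V single-head attention are all illustrative assumptions (the paper presumably derives real MACCS keys from SMILES, e.g. via RDKit, and uses learned projections).

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of floats
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(tokens, d):
    # single-head scaled dot-product self-attention with identity
    # Q/K/V projections (a stand-in for one transformer-encoder layer)
    out = []
    scale = math.sqrt(d)
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in tokens]
        w = softmax(scores)
        out.append([sum(wj * v[i] for wj, v in zip(w, tokens)) for i in range(d)])
    return out

def embed_bits(fp, d):
    # hypothetical embedding: each *set* fingerprint bit becomes one
    # d-dimensional token (fixed sinusoidal features of its bit index)
    return [[math.sin((b + 1) * (i + 1)) for i in range(d)]
            for b, bit in enumerate(fp) if bit]

# toy 167-bit MACCS-style fingerprint with four substructure bits set
fp = [0] * 167
for b in (11, 42, 80, 125):
    fp[b] = 1

tokens = embed_bits(fp, d=8)
encoded = self_attention(tokens, d=8)
print(len(tokens), len(encoded), len(encoded[0]))  # → 4 4 8
```

In the actual model, the encoded token representations would be pooled and combined with a protein-sequence representation before regressing the affinity value; this sketch only shows why a bit fingerprint is a natural token sequence for an attention encoder.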