2021
DOI: 10.1007/s12539-021-00481-0

Anti-cancer Peptide Recognition Based on Grouped Sequence and Spatial Dimension Integrated Networks

Citation types: 0 supporting, 8 mentioning, 0 contrasting
Cited by 8 publications (8 citation statements)
References 29 publications

“…For further verifying the effectiveness of our method, we compared ACPPfel with the existing methods including ACP-DL (Yi et al, 2019), DeepACPpred (Lane and Kahanda, 2021), ACP-MHCNN (Ahmed et al, 2021), GRCI-Net (You et al, 2022), StackACPred (Mishra et al, 2019) on the cross-validation datasets and iACP (Chen et al, 2016), PEPred-Suite (Wei et al, 2019), ACPpred-Fuse (Rao et al, 2020b), ACPred-FL (Wei et al, 2018), ACPred (Schaduangrat et al, 2019), AntiCP (Kumar and Li, 2017), DeepACPpred, AntiCP_2.0 (Agrawal et al, 2021b), iACP-DRLF (Lv et al, 2021b), ME-ACP (Feng et al, 2022) on alternative independent datasets.…”
Section: Results (mentioning; confidence: 99%)

“…For further verifying the effectiveness of our method, we compared ACPPfel with the existing methods including ACP-DL (Yi et al, 2019), DeepACPpred (Lane and Kahanda, 2021), ACP-MHCNN (Ahmed et al, 2021), GRCI-Net (You et al, 2022), …” [Quote truncated by extraction; the trailing text is figure-caption residue: ROC curve performance on the independent validation sets of the ACP740 and ACPfel datasets.]
Section: Comparison With the State-of-the-art Approaches (mentioning; confidence: 99%)

“…To demonstrate the effectiveness of our designed model, we compared ACP-BC with other state-of-the-art methods. We evaluated the performance of ACP-DA [47], ACP-DL [43], GRCI-Net [50], DeepACPpred [51], ACP-MHCNN [40], ACP-check [49], and other methods using the same ACP740 and ACP240 datasets for a fair comparison. Compared to ACP-DA, our model achieved better results with the combined use of two feature enhancement methods.…”
Section: Comparison Of Existing Methods (mentioning; confidence: 99%)

“…Zhu et al [49] developed the ACP-check model, which uses LSTM networks to extract time-dependent information from peptide sequences for anticancer peptides to be identified effectively. You et al [50] fused the sparse matrix features of BPF and the k-mer sparse matrix to construct a new bidirectional short-term memory network, which achieves the prediction of anticancer peptides through two sets of dense network layers. The aforementioned studies demonstrate significant advancements in the field of computational peptide-based cancer research.…”
Section: Introduction (mentioning; confidence: 99%)

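As an illustration of the two sequence encodings the excerpt above attributes to GRCI-Net, here is a minimal Python sketch (not the authors' released code) of a binary profile feature (BPF, i.e. one-hot) matrix and a k-mer sparse matrix. The example peptide, the padding length, and k = 2 are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of BPF (one-hot) and k-mer encodings for a peptide.
import numpy as np
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def bpf_encode(seq: str, max_len: int = 50) -> np.ndarray:
    """Binary profile feature: one-hot matrix, zero-padded to max_len x 20."""
    mat = np.zeros((max_len, len(AMINO_ACIDS)), dtype=np.float32)
    for pos, aa in enumerate(seq[:max_len]):
        mat[pos, AA_INDEX[aa]] = 1.0
    return mat

def kmer_sparse_vector(seq: str, k: int = 2) -> np.ndarray:
    """Normalized occurrence counts over all 20**k possible k-mers;
    mostly zeros for short peptides, hence 'sparse'."""
    index = {"".join(p): i for i, p in enumerate(product(AMINO_ACIDS, repeat=k))}
    vec = np.zeros(len(index), dtype=np.float32)
    for i in range(len(seq) - k + 1):
        vec[index[seq[i:i + k]]] += 1.0
    total = vec.sum()
    return vec / total if total else vec

peptide = "FLPLLAGLAANFLPKIFCKITRKC"      # illustrative sequence, not from the paper
print(bpf_encode(peptide).shape)          # (50, 20)
print(kmer_sparse_vector(peptide).shape)  # (400,) for k = 2
```

The BPF matrix preserves residue order for the sequence branch of a model, while the k-mer vector summarizes local composition; the two views carry complementary information, which is why methods like the one quoted above fuse them.
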
“Rao et al (2020) designed a multi-view feature extraction model called ACPred-Fuse, which employed a total of 29 different sequence-based feature encoding methods and used RF to further select features. You et al’s (2022) GRCI-Net model used binary structure and K-mer sparse matrix to extract the features of peptide sequences, and used principal component analysis to fuse the two features. The output was then fed into a classifier composed of bidirectional long short-term memory network (Bi-LSTM) and CNN.…”
Section: Introduction (mentioning; confidence: 99%)

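The fusion-and-classification pipeline described in the excerpt above (two feature views reduced with principal component analysis, then a Bi-LSTM followed by a CNN) can be sketched as follows. This is a hedged approximation, not the published GRCI-Net architecture: the concatenate-then-PCA fusion, the layer sizes, and the PCA dimension are all assumptions made for illustration.

```python
# Hypothetical PCA-fusion + Bi-LSTM/CNN classifier sketch (PyTorch + scikit-learn).
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

n_peptides, bpf_dim, kmer_dim, fused_dim = 128, 50 * 20, 400, 64

# Toy stand-ins for the BPF and k-mer feature matrices.
rng = np.random.default_rng(0)
bpf_feats = rng.random((n_peptides, bpf_dim), dtype=np.float32)
kmer_feats = rng.random((n_peptides, kmer_dim), dtype=np.float32)

# Fuse the two views: concatenate, then reduce with PCA (an assumed fusion scheme).
fused = PCA(n_components=fused_dim).fit_transform(
    np.concatenate([bpf_feats, kmer_feats], axis=1)).astype(np.float32)

class BiLSTMCNN(nn.Module):
    """Bi-LSTM over the fused vector (treated as a length-64 sequence of
    scalars), then a 1-D convolution and a dense head for ACP / non-ACP."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, 16, kernel_size=3, padding=1)
        self.head = nn.Linear(16 * fused_dim, 2)

    def forward(self, x):                              # x: (batch, fused_dim)
        h, _ = self.lstm(x.unsqueeze(-1))              # (batch, fused_dim, 2*hidden)
        c = torch.relu(self.conv(h.transpose(1, 2)))   # (batch, 16, fused_dim)
        return self.head(c.flatten(1))                 # class logits

logits = BiLSTMCNN()(torch.from_numpy(fused))
print(logits.shape)  # torch.Size([128, 2])
```

Applying PCA after concatenation is one simple way to realize the "fuse the two features" step the excerpt mentions; it forces the downstream recurrent and convolutional layers to operate on a single compact representation rather than on two views of different dimensionality.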