2021
DOI: 10.3389/fgene.2021.738274

iCDI-W2vCom: Identifying the Ion Channel–Drug Interaction in Cellular Networking Based on word2vec and node2vec

Abstract: Ion channels are the second largest drug target family. Ion channel dysfunction may lead to a number of diseases such as Alzheimer's disease, epilepsy, cephalalgia, and type II diabetes. For predicting ion channel–drug interactions, computational approaches are effective and efficient compared with the costly, labor-intensive, and time-consuming experimental methods. Most of the existing methods can only be applied to ion channels with known 3D structures; however, the 3D structures of most i…
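The representation-learning idea named in the title can be illustrated with a short sketch. The following is a minimal, simplified Python illustration (assuming networkx and gensim are installed): unbiased random walks over a toy drug–ion channel graph are fed to word2vec as "sentences", which is the core idea behind node2vec (the real method uses biased second-order walks). The graph edges, node names, and hyperparameters are placeholder assumptions, not the paper's data or code.

```python
# Sketch: node embeddings via random walks + word2vec (node2vec, simplified).
import random
import networkx as nx
from gensim.models import Word2Vec

# Toy interaction graph; edges are illustrative placeholders.
G = nx.Graph([("drug_A", "Kv1.1"), ("drug_A", "Nav1.5"),
              ("drug_B", "Nav1.5"), ("drug_C", "Kv1.1")])

def random_walks(graph, num_walks=50, walk_length=10, seed=0):
    """Generate unbiased random walks starting from every node."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for node in graph.nodes:
            walk = [node]
            while len(walk) < walk_length:
                walk.append(rng.choice(list(graph.neighbors(walk[-1]))))
            walks.append(walk)
    return walks

# Treat each walk as a "sentence" so word2vec learns one vector per node.
model = Word2Vec(random_walks(G), vector_size=32, window=5, min_count=1, sg=1)
print(model.wv["drug_A"][:5])
```
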

Cited by 6 publications (5 citation statements)
References 43 publications

“…We evaluated the BERT_Mean + DWT feature extraction method and compared it with several other classic protein and drug feature extraction methods: Pr_Word2vec (a 64-D protein vector extracted by an unsupervised word2vec model that encodes important biophysical and biochemical information (Yang et al., 2018; Zhang et al., 2020)), BERT_First (a 1024-D vector taken from the first row of the BERT output matrix to represent the protein) (Nambiar et al., 2020), FP2_Word2vec (Jaeger et al., 2018), drug_Node2vec (Grover and Leskovec, 2016; Tetko et al., 2020), drug_Word2vec (Zhang et al., 2020; Zheng et al., 2021), and drug_GCN (Chen et al., 2020). Figures 3–6 show the Matthews correlation coefficient (MCC) obtained for each approach on the four datasets with the CNN + BRL classifier via 10-fold cross-validation.…”
Section: Results
confidence: 99%
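
As a concrete illustration of the evaluation protocol quoted above (MCC under 10-fold cross-validation), here is a minimal scikit-learn sketch. The random-forest classifier and synthetic 64-D features are stand-in assumptions; the citing study's actual CNN + BRL classifier, features, and datasets are not reproduced here.

```python
# Sketch: 10-fold cross-validation scored with the Matthews correlation
# coefficient (MCC), mirroring the evaluation protocol described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef, make_scorer
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # placeholder 64-D feature vectors
y = rng.integers(0, 2, size=200)    # 1 = interacting pair, 0 = non-interacting

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring=make_scorer(matthews_corrcoef))
print(f"MCC per fold: {np.round(scores, 3)}")
print(f"Mean MCC: {scores.mean():.3f}")
```
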
“…Recently, many word-embedding methods have been used for protein feature extraction; for example, Zheng et al. identified ion channel–drug interactions using both word2vec and node2vec as molecular representation learning methods (Zheng et al., 2021). However, these methods are still imperfect: they map every word to a single fixed vector, so the representation is context-independent.…”
Section: Methods
confidence: 99%
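
The context-independence the citing authors criticize is easy to see in a small sketch: training word2vec on overlapping k-mers assigns every k-mer exactly one vector, whatever its surroundings. The sequences, k-mer size, and dimensions below are illustrative assumptions, not the cited study's settings.

```python
# Sketch: context-independent word-embedding of biological sequences.
from gensim.models import Word2Vec

def kmers(seq, k=3):
    """Split a sequence into overlapping k-mer 'words'."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Toy protein-like sequences standing in for a real corpus.
corpus = [kmers(s) for s in ["MKVLAAGITALLA", "MKVLSSGITQLLA", "GITALLAMKV"]]
model = Word2Vec(corpus, vector_size=64, window=5, min_count=1, sg=1, epochs=50)

# Each k-mer maps to exactly one vector regardless of context --
# the limitation pointed out above.
print(model.wv["GIT"].shape)  # (64,)
```
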
“…Once the benchmark dataset has been prepared for the study, the next important step is formulating the samples and extracting the best feature set for constructing a robust and superior computational model. In recent years, various feature encoding strategies have been used to encode biological sequence fragments, such as PseKNC (17), one-hot (21, 22), physicochemical features, and word2vec (23–25). This study selected some of the most common feature encoding approaches, including six physicochemical feature encoding strategies and the frequency of occurrence of k-nearest-neighbor nucleic acids, to describe RNA fragments.…”
Section: Methods
confidence: 99%
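
As an illustration of the simplest encoding mentioned above (one-hot), the sketch below encodes an RNA fragment as a binary matrix. The alphabet ordering is a common convention, not necessarily the cited study's exact scheme.

```python
# Sketch: one-hot encoding of an RNA fragment as an (L, 4) binary matrix.
import numpy as np

RNA_ALPHABET = "ACGU"

def one_hot(fragment: str) -> np.ndarray:
    """Each row is a position; each column is one of A, C, G, U."""
    idx = {base: i for i, base in enumerate(RNA_ALPHABET)}
    mat = np.zeros((len(fragment), len(RNA_ALPHABET)), dtype=np.int8)
    for pos, base in enumerate(fragment.upper()):
        mat[pos, idx[base]] = 1
    return mat

print(one_hot("AUGC"))
```
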
“…For protein feature extraction, numerous word-embedding techniques have recently been utilized [35], but these techniques map each word to a single fixed vector, making the representation context-independent. In response to the exponential growth of textual data, the first fine-tuning-based representation model, bidirectional encoder representations from transformers (BERT) [36], can generate distinct representations for the same word based on its context [36, 37].…”
Section: Methods
confidence: 99%
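
By contrast with fixed word vectors, a BERT-style model yields different embeddings for the same word in different contexts. A minimal sketch using the public bert-base-uncased checkpoint via Hugging Face transformers (an assumption; any BERT variant would illustrate the same point):

```python
# Sketch: the same word gets context-dependent embeddings from BERT.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_of(sentence: str, word: str) -> torch.Tensor:
    """Return the hidden state at `word`'s position (assumes `word`
    appears in the sentence and is a single token in the vocabulary)."""
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    pos = inputs.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
    return hidden[pos]

a = embedding_of("the ion channel opens", "channel")
b = embedding_of("the news channel airs", "channel")
print(torch.cosine_similarity(a, b, dim=0))  # < 1.0: context-dependent
```
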