Proceedings of the 2017 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611974973.6

Computational Drug Discovery with Dyadic Positive-Unlabeled Learning

Abstract: Computational Drug Discovery, which uses computational techniques to facilitate and improve the drug discovery process, has attracted considerable interest in recent years. Drug Repositioning (DR) and Drug-Drug Interaction (DDI) prediction are two key problems in drug discovery, and many computational techniques have been proposed for them in the last decade. Although these two problems have mostly been researched separately in the past, both DR and DDI can be formulated as the problem of detecting positive inter…

Cited by 18 publications (6 citation statements)
References 25 publications
“…In the field of drug discovery, the tasks of drug repositioning, which looks for interactions between drugs and diseases, and drug-drug interaction prediction are very important. To find these interactions, a pairwise scoring function can be trained so that known interactions score higher than pairs that are not known to interact [64]. The rationale behind this method is similar to RSVM [92].…”
Section: Applications
confidence: 99%
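
To make the pairwise-scoring idea above concrete, here is a minimal sketch, assuming a low-rank bilinear scorer over (drug, disease) index pairs; it illustrates the general technique, not the exact model of [64]. Known interactions are trained to out-score uniformly sampled pairs, which are treated as unlabeled rather than as confirmed negatives.

import torch
import torch.nn.functional as F

class BilinearScorer(torch.nn.Module):
    # Scores a (drug, disease) pair with a low-rank bilinear form (assumed architecture).
    def __init__(self, n_drugs, n_diseases, dim=32):
        super().__init__()
        self.drug_emb = torch.nn.Embedding(n_drugs, dim)
        self.dis_emb = torch.nn.Embedding(n_diseases, dim)

    def forward(self, drug_idx, dis_idx):
        return (self.drug_emb(drug_idx) * self.dis_emb(dis_idx)).sum(-1)

def ranking_step(model, optimizer, pos_pairs, n_drugs, n_diseases):
    # One pairwise update: known positives should out-score sampled unlabeled pairs.
    drug_p, dis_p = pos_pairs[:, 0], pos_pairs[:, 1]
    # Uniformly sampled pairs may include undiscovered positives; that is the
    # positive-unlabeled aspect the statement above refers to.
    drug_u = torch.randint(0, n_drugs, (len(pos_pairs),))
    dis_u = torch.randint(0, n_diseases, (len(pos_pairs),))
    margin = model(drug_p, dis_p) - model(drug_u, dis_u)
    loss = F.softplus(-margin).mean()  # logistic pairwise ranking loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Any scorer with the same interface could be substituted; the key property described in the statement is that training ranks known interactions above unknown pairs rather than classifying unknown pairs as hard negatives.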
“…where E is the set of positive hyperlinks, F is the set of negative hyperlinks, and σ(·) = log(1 + exp(·)) is the logistic function [22,54]. We chose the above loss function since it offers better performance compared to traditional classification loss functions such as cross-entropy loss.…”
Section: Training Algorithm
confidence: 99%
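
Reading σ(x) = log(1 + exp(x)) as defined above, the loss compares scores of positive and negative hyperlinks. A hedged sketch of one possible pairing scheme follows; pairing every positive with every negative is an assumption for illustration, not necessarily the cited paper's construction.

import torch
import torch.nn.functional as F

def logistic_ranking_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    # pos_scores: scores of hyperlinks in E; neg_scores: scores of hyperlinks in F.
    # Broadcast to form every (positive, negative) pair, then apply
    # log(1 + exp(s_neg - s_pos)) so negatives are pushed below positives.
    gap = neg_scores.unsqueeze(0) - pos_scores.unsqueeze(1)
    return F.softplus(gap).mean()

F.softplus computes log(1 + exp(x)) in a numerically stable way, avoiding the overflow that a literal composition of log and exp would incur for large score gaps.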
“…The set of unknown relationships, i.e., 2^V − E, may, in fact, contain undiscovered hyperlinks that should belong to the existing ones. Following prior work [26], we rely on a ranking objective as follows:…”
Section: Hyperlink Scoring Layer
confidence: 99%
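
Because 2^V − E is exponentially large, ranking objectives of this kind are in practice trained against sampled candidates from the unknown set rather than against it in full. The sampler below is purely illustrative; the corruption scheme and names are assumptions, not the procedure of [26].

import random

def corrupt_hyperlink(hyperlink, vertices, observed):
    # Replace one member of an observed hyperlink with a random vertex, retrying
    # until the result is neither the original nor another observed hyperlink.
    while True:
        nodes = list(hyperlink)
        nodes[random.randrange(len(nodes))] = random.choice(vertices)
        candidate = frozenset(nodes)
        if candidate != hyperlink and candidate not in observed:
            return candidate

# Toy example: three observed hyperlinks over six vertices.
V = list(range(6))
E = {frozenset({0, 1, 2}), frozenset({2, 3}), frozenset({1, 4, 5})}
unknown_candidates = [corrupt_hyperlink(e, V, E) for e in E]

Candidates drawn this way are still only unknown, not verified negatives, which is exactly why a ranking objective is preferred over a hard classification loss in the statements above.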
“…The problem of link prediction in graphs has numerous applications [20] in fields such as social network analysis [25], knowledge bases [29], and bioinformatics [26]. However, in many real-world problems relationships go beyond pairwise associations.…”
Section: Introduction
confidence: 99%