2009
DOI: 10.1021/ci8004379
Influence Relevance Voting: An Accurate And Interpretable Virtual High Throughput Screening Method

Abstract: Given activity training data from High-Throughput Screening (HTS) experiments, virtual High-Throughput Screening (vHTS) methods aim to predict in silico the activity of untested chemicals. We present a novel method, the Influence Relevance Voter (IRV), specifically tailored for the vHTS task. The IRV is a low-parameter neural network which refines a k-nearest neighbor classifier by non-linearly combining the influences of a chemical's neighbors in the training set. Influences are decomposed, also non-linearly,…
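The abstract describes the IRV as a low-parameter network over k-nearest-neighbor influences. As a rough illustration of that idea only, here is a toy sketch: the weights `w_sim`, `w_vote`, and `bias`, and the simple relevance/vote forms, are invented for this example, whereas the actual IRV learns its non-linear combinations from data.

```python
import math

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two fingerprints given as
    sets of on-bit indices."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def irv_predict(query, train, k=3, w_sim=1.0, w_vote=1.0, bias=0.0):
    """Toy IRV-style prediction (illustrative parameterization, not the
    paper's actual network). train is a list of (fingerprint_set, label)
    pairs with label 1 = active, 0 = inactive."""
    # Preprocessing: rank training molecules by Tanimoto similarity to
    # the query and keep the k nearest neighbors.
    neighbors = sorted(train, key=lambda t: tanimoto(query, t[0]),
                       reverse=True)[:k]
    # Processing: each neighbor casts an influence =
    # relevance (how much it matters) x vote (which class it supports).
    z = bias
    for fp, label in neighbors:
        relevance = w_sim * tanimoto(query, fp)
        vote = w_vote * (1.0 if label == 1 else -1.0)
        z += relevance * vote
    # A logistic output unit turns the summed influences into a probability.
    return 1.0 / (1.0 + math.exp(-z))

train = [({0, 1, 2}, 1), ({0, 1, 3}, 1), ({7, 8, 9}, 0)]
print(irv_predict({0, 1, 2, 4}, train))  # close to two actives -> above 0.5
```

Because each neighbor's contribution is an explicit relevance-times-vote term, a prediction can be traced back to the specific training compounds that drove it, which is the source of the interpretability claimed in the title.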

Cited by 54 publications (58 citation statements)
References 44 publications (79 reference statements)
“…We also trained graph convolution models on some additional datasets in order to compare to the “neural fingerprints” (NFP) of Duvenaud et al [9] and the influence relevance voter (IRV) method of Swamidass et al [36] (see “Comparisons to other methods” section). Table 5 compares graph convolution models to published results on these datasets under similar cross-validation conditions.…”
Section: Results (mentioning; confidence: 99%)
“…In this work, we emphasize their application to small molecules—undirected graphs of atoms connected by bonds—for virtual screening. Starting from simple descriptions of atoms, bonds between atoms, and pairwise relationships in a molecular graph, we have demonstrated performance that is comparable to state of the art multitask neural networks trained on traditional molecular fingerprint representations, as well as alternative methods including “neural fingerprints” [9] and influence relevance voter [36]. …”
Section: Discussion (mentioning; confidence: 99%)
“…The approach assesses the influence of structural neighbors of a compound on its classification as active or inactive. In initial benchmark investigations, IRV compound recall was at least comparable to SVMs [21]. Different from many other machine-learning approaches, IRV models are chemically interpretable.…”
Section: New Concepts (mentioning; confidence: 92%)
“…As a general caveat, it is often difficult to understand whether complex LBVS protocols are truly required to identify active compounds or which of the components might have contributed most. Furthermore, as an exemplary new methodology, the influence relevance voter (IRV) has been introduced that combines a low-level neural network with a k-nearest neighbor classifier [21]. The approach assesses the influence of structural neighbors of a compound on its classification as active or inactive.…”
Section: New Concepts (mentioning; confidence: 99%)
“…For the SVM algorithm, it is well known that the Tanimoto similarity satisfies Mercer's condition (Swamidass et al., 2005a), and therefore it defines a kernel that can be used in an SVM for classification (see Azencott et al., 2007; Mahé et al., 2006; Swamidass et al., 2005b, for other related kernels). Finally, we use the IRV algorithm (Swamidass et al., 2009), which can be viewed as a refinement of kNN. At a high level, the IRV is defined by a preprocessing step, during which all the neighbors (as defined by the Tanimoto similarity metric) of a test molecule are identified, and a processing step during which information from each neighbor is fed into a neural network to produce a prediction.…”
Section: Classifiers (mentioning; confidence: 99%)
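The Tanimoto similarity recurring in these excerpts (as an SVM kernel and as the IRV neighbor metric) is simple to compute on binary fingerprints. A minimal sketch, representing each fingerprint as the set of its on-bit indices (a representation chosen here for clarity):

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity: shared on-bits / total distinct on-bits."""
    if not a and not b:
        return 0.0  # convention used here: two empty fingerprints are dissimilar
    return len(a & b) / len(a | b)

fp1 = {0, 3, 5, 9}   # on-bit indices of one binary fingerprint
fp2 = {0, 3, 7}
print(tanimoto(fp1, fp2))  # 2 shared bits / 5 distinct bits = 0.4
```

The measure is bounded in [0, 1] and symmetric, which is part of why it satisfies Mercer's condition and can serve directly as an SVM kernel, as the excerpt above notes.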