Modern drug discovery is extremely expensive and time-consuming. Computational approaches help accelerate and reduce the cost of drug discovery, but existing software packages for docking-based drug discovery suffer from both low accuracy and high latency. A few recent machine learning-based approaches improve virtual screening by better evaluating protein-ligand binding affinity, but such methods rely heavily on conventional docking software to sample docking poses, which results in excessive execution latencies. Here, we propose and evaluate a novel graph neural network (GNN)-based framework, MedusaGraph, which includes both pose-prediction (sampling) and pose-selection (scoring) models. Unlike previous machine learning-centric studies, MedusaGraph generates docking poses directly and achieves a 10x to 100x speedup over state-of-the-art approaches while attaining slightly better docking accuracy.
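As a rough illustration of the two-stage design described above, the sketch below pairs a pose-prediction GNN that iteratively displaces ligand atom coordinates with a pose-selection GNN that scores candidate poses. This is a minimal sketch, not the authors' implementation: the layer structure, hidden size, and number of refinement iterations are all illustrative assumptions.

```python
# Minimal sketch of a pose-prediction / pose-selection GNN pair.
# Layer sizes and `num_iters` are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """One round of message passing over a dense adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):
        # Mean-aggregate neighbor features, then update each node.
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        msg = adj @ h / deg
        return torch.relu(self.lin(torch.cat([h, msg], dim=-1)))

class PosePredictor(nn.Module):
    """Iteratively moves ligand atoms toward a predicted binding pose."""
    def __init__(self, dim=64, num_iters=3):
        super().__init__()
        self.layers = nn.ModuleList(SimpleGNNLayer(dim) for _ in range(num_iters))
        self.delta = nn.Linear(dim, 3)  # per-atom 3D displacement

    def forward(self, h, coords, adj):
        for layer in self.layers:
            h = layer(h, adj)
            coords = coords + self.delta(h)  # refine the pose directly
        return coords

class PoseSelector(nn.Module):
    """Scores a candidate pose; higher = more likely near-native."""
    def __init__(self, dim=64):
        super().__init__()
        self.layer = SimpleGNNLayer(dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, h, adj):
        h = self.layer(h, adj)
        return self.score(h.mean(dim=0))  # graph-level score
```

Generating poses with a forward pass like this, rather than calling a conventional docking sampler, is what removes the sampling bottleneck the abstract attributes to earlier scoring-only approaches.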
Background: Patients with chronic renal disease should be vaccinated before dialysis becomes necessary, as earlier vaccination may improve the seroconversion rate of hepatitis B vaccination.
Objectives: In this study, we aimed to compare seroconversion and immune response rates using 4 doses of 40 μg and 3 doses of 20 μg of Euvax B recombinant hepatitis B surface antigen (HBsAg) vaccine administered to predialysis patients with chronic kidney disease (CKD).
Patients and Methods: In an open, randomized clinical trial, we compared seroconversion rates in 51 predialysis patients with mild or moderate chronic renal failure who received either 4 doses of 40 μg or 3 doses of 20 μg of Euvax B recombinant hepatitis B vaccine, administered at 0, 1, 2, and 6 months or at 0, 1, and 6 months, respectively.
Results: The difference in seroconversion rates after 4 doses of 40 μg (80.88%) compared to 3 doses of 20 μg (92%) was not significant (P = 0.4124). The mean anti-HBs antibody level after 4 doses of 40 μg at 0, 1, 2, and 6 months (182.2 ± 286.7) was significantly higher than that after 3 doses of 20 μg at 0, 1, and 6 months (96.9 ± 192.1) (P = 0.004). Seroconversion after 4 doses of 40 μg (80.8%) was also significantly higher than that after 3 doses of 20 μg (77%) (P = 0.004). Multivariable analysis showed that none of the examined variables contributed to seroconversion.
Conclusions: We found that 4 doses of 40 μg did not lead to significantly higher seroconversion than 3 doses of 20 μg.
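For readers who want to reproduce this kind of two-arm contrast, the sketch below shows how a seroconversion comparison can be tested. The per-arm counts are hypothetical reconstructions from the reported rates and the total of 51 patients (roughly 26 vs 25), so the resulting P value is illustrative, not the published one.

```python
# Hedged illustration of a two-arm seroconversion comparison.
# Counts below are hypothetical (the abstract gives only rates and N = 51).
from scipy.stats import fisher_exact

table = [[21, 5],   # 4 x 40 ug arm: seroconverted / not (21/26 ~ 80.8%)
         [23, 2]]   # 3 x 20 ug arm: seroconverted / not (23/25 = 92%)
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher exact P = {p_value:.4f}")
```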
Despite the recent success of Graph Neural Networks (GNNs), training GNNs on large graphs remains challenging. The limited resource capacities of existing servers, the dependencies between nodes in a graph, and the privacy concerns raised by centralized storage and model learning have spurred the need for effective distributed algorithms for GNN training. However, existing distributed GNN training methods impose either excessive communication costs or large memory overheads that hinder their scalability. To overcome these issues, we propose a communication-efficient distributed GNN training technique named Learn Locally, Correct Globally (LLCG). To reduce communication and memory overhead, each local machine in LLCG first trains a GNN on its local data, ignoring the dependencies between nodes on different machines, and then sends the locally trained model to the server for periodic model averaging. However, ignoring node dependencies can result in significant performance degradation. To address this degradation, we propose applying Global Server Corrections on the server to refine the locally learned models. We rigorously analyze the convergence of distributed methods with periodic model averaging for training GNNs and show that naively applying periodic model averaging while ignoring node dependencies suffers from an irreducible residual error. This residual error can, however, be eliminated by the proposed global corrections, yielding a fast convergence rate. Extensive experiments on real-world datasets show that LLCG significantly improves efficiency without hurting performance.
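The sketch below illustrates the overall loop structure described above, under stated assumptions: the model interface, the partitioned data loaders, and the number of local and correction steps are placeholders rather than the paper's exact procedure.

```python
# Sketch of one LLCG-style round: local training, averaging, global correction.
# Model signature, loaders, and step counts are placeholder assumptions.
import copy
import torch

def average_state_dicts(states):
    """Parameter-wise mean of a list of model state dicts."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(0)
    return avg

def llcg_round(global_model, local_loaders, server_loader,
               local_steps=10, correction_steps=2, lr=1e-2):
    # 1) Learn Locally: each machine trains on its own partition,
    #    ignoring edges that cross partition boundaries.
    local_states = []
    for loader in local_loaders:
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _, (x, adj, y) in zip(range(local_steps), loader):
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(x, adj), y)
            loss.backward()
            opt.step()
        local_states.append(model.state_dict())

    # 2) Periodic model averaging on the server.
    global_model.load_state_dict(average_state_dicts(local_states))

    # 3) Correct Globally: a few server-side steps on mini-batches that
    #    retain cross-partition edges, removing the residual error that
    #    pure averaging leaves behind.
    opt = torch.optim.SGD(global_model.parameters(), lr=lr)
    for _, (x, adj, y) in zip(range(correction_steps), server_loader):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(global_model(x, adj), y)
        loss.backward()
        opt.step()
    return global_model
```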
Graph Convolutional Networks (GCNs) are known to suffer from performance degradation as the number of layers increases, which is usually attributed to over-smoothing. Despite the apparent consensus, we observe a discrepancy between the theoretical understanding of over-smoothing and the practical capabilities of GCNs. Specifically, we argue that over-smoothing does not necessarily occur in practice: a deeper model is provably expressive, can converge to the global optimum at a linear rate, and can achieve very high training accuracy as long as it is properly trained. Despite this capacity for high training accuracy, empirical results show that deeper models generalize poorly at test time, and a theoretical understanding of this behavior has remained elusive. To close this gap, we carefully analyze the generalization capability of GCNs and show that the training strategies needed to achieve high training accuracy significantly deteriorate their generalization capability. Motivated by these findings, we propose a decoupled structure for GCNs that detaches the weight matrices from feature propagation, preserving expressive power while ensuring good generalization performance. We conduct empirical evaluations on various synthetic and real-world datasets to validate the correctness of our theory.
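A minimal sketch of the decoupled structure follows, assuming standard symmetric normalization and a two-layer MLP; the paper's exact architecture may differ. The key point is that the propagation depth k adds no trainable layers, so depth no longer multiplies the weight matrices being optimized.

```python
# Sketch of a decoupled GCN: parameter-free propagation, then a learned MLP.
# Normalization scheme and MLP shape are assumed, not the paper's exact model.
import torch
import torch.nn as nn

def propagate(features, adj, k):
    """k steps of symmetric-normalized propagation: (D^-1/2 A D^-1/2)^k X."""
    deg = adj.sum(-1)
    d_inv_sqrt = deg.clamp(min=1).pow(-0.5)
    norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    for _ in range(k):
        features = norm_adj @ features
    return features

class DecoupledGCN(nn.Module):
    """Weights apply only after propagation (an MLP on smoothed features)."""
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, x, adj, k=10):
        with torch.no_grad():        # propagation has no parameters
            x = propagate(x, adj, k)
        return self.mlp(x)
```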