3D similarity is useful for predicting the profiles of unprecedented molecular frameworks that are 2D-dissimilar to known compounds. When comparing pairs of compounds, their 3D similarity depends on conformational sampling, the alignment method, the chosen descriptors, and the similarity coefficients. In addition to these four factors, 3D chemocentric target prediction of an unknown compound requires compound–target associations, which replace compound-to-compound comparisons with compound-to-target comparisons. In this study, quantitative comparison of query compounds to target classes (one-to-group) was achieved via two types of 3D similarity distributions for each target class, with parameter optimization of the fitting models: (1) maximum likelihood (ML) estimation for queries and (2) a Gaussian mixture model (GMM) for target classes. While the Jaccard–Tanimoto similarity of query-to-ligand pairs with 3D structures (sampled multi-conformers) can be transformed into a query distribution by ML estimation, the ligand-pair similarity within each target class can be transformed into a representative distribution of the target class through a GMM, whose hyperparameters are fitted via the expectation–maximization (EM) algorithm. To quantify how well a query ligand discriminates between target classes, the Kullback–Leibler (K–L) divergence of each query was calculated and compared across targets. 3D similarity-based K–L divergence, together with the probability and the feasibility index (Fm), showed discriminative power for some query–class associations. The K–L divergence of 3D similarity distributions can thus complement (1) ranking by the 3D similarity score or (2) the p-value of a single 3D similarity distribution in predicting the targets of unprecedented drug scaffolds.
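The pipeline above can be sketched numerically: fit a parametric distribution to query-to-ligand similarities by ML, fit a GMM (via EM) to within-class ligand-pair similarities, and compute the K–L divergence between the two densities. This is a minimal sketch with synthetic beta-distributed scores standing in for real Tanimoto similarities, a single Gaussian as the ML query model, and a two-component GMM; none of these choices are prescribed by the study.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for 3D Jaccard-Tanimoto similarity scores in [0, 1].
query_sims = rng.beta(2, 5, size=200)    # query-to-ligand similarities
class_sims = rng.beta(3, 4, size=1000)   # within-class ligand-pair similarities

# (1) ML estimate of the query distribution (a plain Gaussian fit here).
mu, sigma = norm.fit(query_sims)

# (2) GMM of the target class, parameters fitted by EM inside sklearn.
gmm = GaussianMixture(n_components=2, random_state=0).fit(class_sims.reshape(-1, 1))

# K-L divergence D(query || class), evaluated on a grid over the score range.
x = np.linspace(1e-3, 1.0, 1000)
dx = x[1] - x[0]
p = norm.pdf(x, mu, sigma)
q = np.exp(gmm.score_samples(x.reshape(-1, 1)))  # GMM density on the grid
p /= p.sum() * dx                                # normalize both densities
q /= q.sum() * dx
kl = float(np.sum(p * np.log(p / q)) * dx)
print(f"D_KL(query || class) = {kl:.4f}")
```

Repeating this against each target class's GMM and comparing the divergences is the one-to-group comparison the abstract describes: the class with the smallest divergence is the most compatible with the query.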
The COVID-19 pandemic has severely affected various global markets, increasing the need for new forecasting models for the dry bulk market. This study therefore proposes deep neural network (DNN) architectures for momentary forecasting whose accuracy is not degraded by economic shocks (e.g., COVID-19), and elucidates the strategy for obtaining such DNNs. First, since momentary and short-term forecasting are fundamentally different, they may require independent methods; accordingly, I apply DNNs for time series classification to momentary forecasting. Second, the proposed architecture is constructed with sparsity in mind, because designing DNN architectures robust to arbitrary shocks is a form of overfitting prevention for deep neural networks. Finally, this study proposes indices for quantitatively evaluating DNN architectures that reflect the realized forecasting performance of various deep neural networks. Using these indices, I demonstrate that optimal architectures may need model sparsity in the DNN (i.e., sparsity independent of the input data); the importance of this property is demonstrated experimentally. As a result, the architectures achieved target accuracies of 88%, 91%, and 79%, with stable performance, for Panamax, Supramax, and Capesize vessels, respectively, from February 2016 to September 2021 (i.e., five years and eight months). No clear correlation between model performance and volatility could be identified. Furthermore, both before and after the COVID-19 shock, the proposed optimal models outperformed four other recent models, namely "Facebook Prophet," "DARTS," "SKTIME," and "AutoTS".
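The framing of momentary forecasting as time series classification can be illustrated as follows: windows of past returns become feature vectors, and the label is the sign of the next move. This is a hedged sketch on a synthetic random-walk series (the study uses Baltic Exchange rates for Panamax/Supramax/Capesize vessels), with a small `MLPClassifier` whose L2 penalty `alpha` stands in as a crude proxy for the sparsity-oriented regularization the study argues for; it is not the authors' architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for a freight-rate index (random walk around 100).
prices = np.cumsum(rng.normal(0, 1, 2000)) + 100

# Momentary forecasting as classification: from a window of past returns,
# predict whether the next return is positive.
window = 20
returns = np.diff(prices)
X = np.stack([returns[i:i + window] for i in range(len(returns) - window)])
y = (returns[window:] > 0).astype(int)

# Chronological split (no shuffling) to respect the time ordering.
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), alpha=1e-3,
                    max_iter=500, random_state=0).fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```

On a pure random walk the achievable accuracy hovers near chance; the point of the sketch is the classification framing and the chronological train/test split, not the score.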