2019
DOI: 10.3390/molecules24213909

Multi-Target Chemometric Modelling, Fragment Analysis and Virtual Screening with ERK Inhibitors as Potential Anticancer Agents

Abstract: Two isoforms of extracellular regulated kinase (ERK), namely ERK-1 and ERK-2, are associated with several cellular processes, the aberration of which leads to cancer. The ERK-1/2 inhibitors are thus considered as potential agents for cancer therapy. Multitarget quantitative structure–activity relationship (mt-QSAR) models based on the Box–Jenkins approach were developed with a dataset containing 6400 ERK inhibitors assayed under different experimental conditions. The first mt-QSAR linear model was built with l…
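The Box–Jenkins approach named in the abstract is commonly implemented in mt-QSAR as a moving-average (deviation) descriptor scheme: each descriptor is re-expressed as its deviation from the average value observed under the same experimental condition. A minimal sketch under that assumption follows; the column names ('MW', 'LogP', 'assay_condition') and the averaging over all compounds per condition are illustrative choices, not taken from the paper (which may, for instance, average only over active compounds).

```python
import pandas as pd

def box_jenkins_deviation_descriptors(df, descriptor_cols, condition_col):
    """Compute deviation descriptors D(d, c) = d - mean(d | condition c).

    A common formulation of the Box-Jenkins moving-average approach used in
    mt-QSAR modelling; the exact averaging scheme in the paper may differ.
    """
    deviations = df.copy()
    for col in descriptor_cols:
        # Mean of the descriptor over all compounds assayed under the same condition
        condition_mean = df.groupby(condition_col)[col].transform("mean")
        deviations[f"D({col})"] = df[col] - condition_mean
    return deviations

# Hypothetical usage: 'MW' and 'LogP' are placeholder descriptors,
# 'assay_condition' a placeholder experimental-condition label.
data = pd.DataFrame({
    "MW": [310.4, 287.3, 402.5, 355.1],
    "LogP": [2.1, 3.4, 1.8, 2.9],
    "assay_condition": ["IC50_ERK1", "IC50_ERK1", "IC50_ERK2", "IC50_ERK2"],
})
print(box_jenkins_deviation_descriptors(data, ["MW", "LogP"], "assay_condition"))
```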


Cited by 19 publications (15 citation statements)
References 86 publications (151 reference statements)
“…Yet two additional techniques were included for establishing LDA models, namely fast-stepwise (FS-LDA) and sequential forward selection (SFS-LDA). Even though the GA implemented earlier in QSAR-Co has proved to be a highly efficient feature selection technique, judging from our previous analyses [10,19], the implementation of these feature selection techniques in QSAR-Co-X improves the scope of LDA modelling in multiple ways. Firstly, the application of more feature selection techniques enhances the chances of obtaining more predictive models especially for big data analysis [20].…”
Section: Motivation
confidence: 93%
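The statement above lists sequential forward selection (SFS) as one of the feature selection options added for LDA modelling. Below is a minimal sketch of greedy forward selection wrapped around an LDA classifier using scikit-learn's SequentialFeatureSelector; the random descriptor matrix, the five-feature target and the five-fold scoring are placeholder assumptions, not QSAR-Co-X's actual settings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                  # placeholder descriptor matrix
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # placeholder activity classes

lda = LinearDiscriminantAnalysis()
# Greedy forward selection: add one descriptor at a time, keeping the one
# that most improves cross-validated accuracy, until 5 are selected.
sfs = SequentialFeatureSelector(lda, n_features_to_select=5,
                                direction="forward", cv=5)
sfs.fit(X, y)
print("Selected descriptor indices:", np.flatnonzero(sfs.get_support()))
```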
“…At the same time, different training and validation sets may be obtained by changing the random seed values. As an alternative to random data-splitting, the user may opt for a k-Means Cluster Analysis-based rational dataset division strategy (kMCA) [19,22]. In the latter option, the dataset is first divided into n (user specific) clusters on the basis of input descriptors.…”
Section: Module 1 (LM)
confidence: 99%
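A sketch of the kMCA-style rational division described above: compounds are clustered by k-means on the input descriptors, and the validation set is drawn proportionally from every cluster so that both sets cover the same descriptor space. The cluster count, validation fraction and seed handling below are illustrative assumptions, not the tool's defaults.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmca_split(X, n_clusters=5, validation_fraction=0.2, seed=42):
    """Rational train/validation split: k-means on the input descriptors,
    then sample the same fraction of compounds from every cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    rng = np.random.default_rng(seed)
    val_idx = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        n_val = max(1, int(round(validation_fraction * members.size)))
        val_idx.extend(rng.choice(members, size=n_val, replace=False))
    val_idx = np.array(sorted(val_idx))
    train_idx = np.setdiff1d(np.arange(X.shape[0]), val_idx)
    return train_idx, val_idx

X = np.random.default_rng(1).normal(size=(100, 12))  # placeholder descriptors
train_idx, val_idx = kmca_split(X)
print(len(train_idx), "training compounds,", len(val_idx), "validation compounds")
```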
“…The first step of any mt-QSAR model encompasses a division of the initial dataset into a training and a validation set. In this module, that may be performed following three schemes, namely: (a) pre-determined data distribution, (b) random division and (c) k-means cluster analysis (kMCA) based data division [20]. In the first scheme (a), the user is allowed to explicitly provide information about the training and validation set samples, i.e., the set samples are to be tagged as ‘Train’ and ‘Test’, respectively.…”
Section: Methods
confidence: 99%
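The three division schemes in this excerpt could be dispatched roughly as in the sketch below; the 'Set' column name, the 'Train'/'Test' tags read from it, and the default test fraction are assumptions for illustration, and the kMCA branch is left to a clustering routine like the earlier sketch.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def split_dataset(df, scheme="random", set_col="Set", test_size=0.25, seed=42):
    """Return (training, validation) DataFrames under one of three schemes:
    'predefined' - rows already tagged 'Train'/'Test' in a set column,
    'random'     - random division controlled by a seed,
    'kmca'       - rational division via k-means clustering (not shown here).
    """
    if scheme == "predefined":
        return df[df[set_col] == "Train"], df[df[set_col] == "Test"]
    if scheme == "random":
        return train_test_split(df, test_size=test_size, random_state=seed)
    raise NotImplementedError("kMCA splitting: see the clustering sketch above")

# Hypothetical usage with a pre-tagged dataset
data = pd.DataFrame({"MW": [310.4, 287.3, 402.5, 355.1],
                     "Set": ["Train", "Train", "Test", "Train"]})
train, test = split_dataset(data, scheme="predefined")
print(len(train), "training rows,", len(test), "validation rows")
```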
“…As can be seen, two additional feature selection techniques were included for establishing LDA models, namely fast-stepwise (FS) and sequential forward selection (SFS). Even though the GA implemented earlier in QSAR-Co has proved to be a highly efficient feature selection technique, judging from our previous analyses [11,20], the implementation of these additional feature selection techniques in QSAR-Co-X improves the scope of LDA modelling in multiple ways. Firstly, the application of more feature selection techniques enhances the chances of obtaining more predictive models especially for big data analysis [21].…”
Section: Introduction
confidence: 99%
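For the GA-based feature selection that both excerpts credit to QSAR-Co, a toy sketch is given below: a genetic algorithm evolves binary descriptor masks whose fitness is the cross-validated accuracy of an LDA classifier on the selected columns. Population size, truncation selection, one-point crossover and bit-flip mutation here are generic textbook choices, not the GA actually implemented in QSAR-Co.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def ga_feature_selection(X, y, n_generations=20, pop_size=30, mutation_rate=0.05):
    """Toy genetic algorithm over binary descriptor masks, scored by the
    cross-validated accuracy of an LDA classifier on the selected columns."""
    n_features = X.shape[1]
    population = rng.integers(0, 2, size=(pop_size, n_features))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        lda = LinearDiscriminantAnalysis()
        return cross_val_score(lda, X[:, mask.astype(bool)], y, cv=5).mean()

    for _ in range(n_generations):
        scores = np.array([fitness(ind) for ind in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[: pop_size // 2]]         # truncation selection
        children = []
        for _ in range(pop_size - parents.shape[0]):
            a, b = parents[rng.integers(parents.shape[0], size=2)]
            cut = rng.integers(1, n_features)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < mutation_rate     # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        population = np.vstack([parents, np.array(children)])

    best = max(population, key=fitness)
    return np.flatnonzero(best)

X = rng.normal(size=(150, 20))                  # placeholder descriptor matrix
y = (X[:, 2] - X[:, 7] > 0).astype(int)         # placeholder activity classes
print("GA-selected descriptor indices:", ga_feature_selection(X, y))
```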