“…Now, we have trained the LR-based model by estimating these unknown parameters using a maximum likelihood estimator over the training dataset. Using the estimated parameters, we predicted the class label or response variable (here, ADHD and healthy controls) over the test dataset and also computed the probability of the response variable or class label.…”

Table: Search range of each parameter for each classifier.

- GPC: kernel = ("RBF", "DotProduct", "RationalQuadratic"); length_scale = (1 to 5); alpha = (0.04, 0.05, 0.06); sigma = (0.01, 0.02, 0.03, 0.05, 0.06, 0.07, 0.08, 0.09)
- RF: max_depth = (2, 3, 5, None); n_estimators = (15, 30, 60, 120); min_samples_split = (2, 3, 10); min_samples_leaf = (1, 3, 10); bootstrap = (True, False); criterion = ("gini", "entropy")
- k-NN: n_neighbors = (2 to 13); leaf_size = (4, 5, 6)
- MLP: hidden_layer_sizes = ((120, 120, 50), (60, 120, 50), (60, 240, 100)); activation = ("relu", "tanh", "logistic"); alpha = (0.01, 0.05, 0.001); solver = ("adam"); learning_rate = ("constant", "adaptive")
- DT: max_features = ("auto", "sqrt", "log2"); min_samples_split = (2 to 15); min_samples_leaf = (1 to 11)
- LR: None
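The train-then-predict workflow described above (fit the logistic regression coefficients by maximum likelihood on the training split, then output both class labels and class-membership probabilities on the test split) can be sketched with scikit-learn. The feature matrix and labels below are synthetic placeholders, not the study's ADHD data; scikit-learn's `LogisticRegression` fits its coefficients by maximizing the (penalized) log-likelihood.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real feature matrix and labels
# (1 = ADHD, 0 = healthy control -- purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Maximum-likelihood fit of the logistic regression coefficients
# over the training split.
clf = LogisticRegression().fit(X_train, y_train)

labels = clf.predict(X_test)       # predicted class labels on the test split
probs = clf.predict_proba(X_test)  # probability of each class per test sample
```

Each row of `probs` sums to 1, giving the probability of the response variable for both classes; `labels` is simply the class with the larger probability.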