“…In order to evaluate the classification performance and to determine which is the best algorithm for each group, we have used two measures that have previously been used to evaluate classification algorithm recommendation methods (Song et al, 2012). The first is called ARE (Average Recommendation Error) and it measures the average error of the current recommendation (predicted aggregation method) regarding the best and the worst recommendation (best and worst aggregation methods from the list of methods ordered from the lowest to the highest RMSE), as expressed in equation 5:…”
Section: Selection Of An Aggregation Methods (mentioning)
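The snippet does not reproduce equation 5, but the description suggests a standard normalization: the recommended method's RMSE is placed between the best and the worst RMSE in the ordered list, so the error is 0 when the best method is recommended and 1 when the worst is. A minimal sketch under that assumption (function and variable names are illustrative, not from the paper):

```python
def are(recommended_rmse, best_rmse, worst_rmse):
    """Recommendation error for one dataset, assumed to be normalized:
    0.0 if the recommended aggregation method matches the best one,
    1.0 if it matches the worst one."""
    if worst_rmse == best_rmse:
        return 0.0  # all aggregation methods perform equally well
    return (recommended_rmse - best_rmse) / (worst_rmse - best_rmse)

def average_recommendation_error(cases):
    """Average the per-dataset errors.
    cases: iterable of (recommended, best, worst) RMSE triples."""
    errors = [are(r, b, w) for r, b, w in cases]
    return sum(errors) / len(errors)
```

With this form, ARE over two datasets where the recommendation is exactly best on one and halfway between best and worst on the other comes out to 0.25.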
One possible approach to tackling class imbalance in classification tasks is to resample the training dataset, i.e., to drop some of its elements or to synthesize new ones. Several resampling methods are in wide use, and recent research has shown that the choice of resampling method significantly affects classification quality, which raises the resampling selection problem. An exhaustive search for the optimal resampling is time-consuming and hence of limited use. In this paper, we describe an alternative approach to resampling selection: following the meta-learning concept, we build resampling recommendation systems, i.e., algorithms that recommend a resampling method for a dataset on the basis of its properties.
“…Neighbor recognition is done using the k-NN approach [14]. In this approach, the distance of the new dataset with respect to each old dataset is calculated.…”
Section: Neighbor Recognition (mentioning)
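The snippet does not say which distance the paper uses, so this sketch assumes Euclidean distance over the meta-feature vectors; the knowledge-base layout and names here are illustrative:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two meta-feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def nearest_neighbors(new_meta, knowledge_base, k=3):
    """Rank the stored (old) datasets by distance to the new dataset's
    meta-features and return the ids of the k closest.
    knowledge_base: list of (dataset_id, meta_feature_vector) pairs."""
    ranked = sorted(knowledge_base,
                    key=lambda item: euclidean(new_meta, item[1]))
    return [dataset_id for dataset_id, _ in ranked[:k]]
```

For example, with stored meta-feature vectors [0, 0], [1, 1], and [5, 5], a new dataset at [0.2, 0.2] has the first two as its two nearest neighbors.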
confidence: 99%
“…The neighbor selection and recommendation algorithms [14] are used in the prediction model. The distance of the new dataset's meta-features is calculated with respect to the knowledge base.…”
Section: Experiment-2 (mentioning)
confidence: 99%
“…The three classifiers of the respective nearest datasets are found by the highest regression value. The respective classifiers' win, draw, and loss counts are calculated [14], and in the accuracy prediction model the winning classifier is recommended as the best classifier.…”
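The exact win/draw/loss tallying is not given in the snippet; a common reading is a pairwise comparison of the candidate classifiers' accuracies on each neighbor dataset, with the classifier accumulating the most wins recommended. A sketch under that assumption (all names are illustrative):

```python
from itertools import combinations

def win_draw_loss(accuracies):
    """Pairwise win/draw/loss record among candidate classifiers.
    accuracies: {classifier: [accuracy on each neighbor dataset]}.
    Returns {classifier: [wins, draws, losses]}."""
    record = {clf: [0, 0, 0] for clf in accuracies}
    for a, b in combinations(accuracies, 2):
        for acc_a, acc_b in zip(accuracies[a], accuracies[b]):
            if acc_a > acc_b:
                record[a][0] += 1
                record[b][2] += 1
            elif acc_a < acc_b:
                record[a][2] += 1
                record[b][0] += 1
            else:  # tie on this dataset
                record[a][1] += 1
                record[b][1] += 1
    return record

def recommend(accuracies):
    """Recommend the classifier with the most pairwise wins."""
    record = win_draw_loss(accuracies)
    return max(record, key=lambda clf: record[clf][0])
```

So if one classifier beats every other candidate on the neighbor datasets, it accumulates the most wins and is returned as the recommendation.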
Knowledge discovery is a core data mining task, and a number of classification algorithms are available for it, each differentiated from the others by its performance. The no-free-lunch theorem [1] states that no single algorithm can be the best predictor for all kinds of datasets; an algorithm's performance changes with the characteristics of the dataset. A non-expert cannot easily tell which classifier will be best for his or her dataset. Meta-learning is a machine learning technique that supports non-expert users in selecting a classifier. In meta-learning, dataset characteristics are known as meta-features, and the prediction of a well-suited classifier is based on them. In this paper, in the first experiment, classifier prediction is done using landmarking meta-features with the k-NN approach. In the second experiment, in addition to the first, the win/draw/loss record of the corresponding classifiers is calculated using a recommendation method, and based on that record the best classifier is recommended; here the simple linear regression values of the classifiers are taken into consideration. In both experiments the performance measure is classifier accuracy.