One of the most popular and fundamental methods for machine learning classification is KNN (K-nearest neighbors). Despite its simplicity, this method can achieve good classification results even without prior knowledge of the data distribution. WKNN (weighted KNN) improves on KNN by assigning a weight to each neighbor instead of merely counting the nearby neighbors. Typically, this weight is defined as the inverse of the squared distance (\(weight=\frac{1}{{d}^{2}}\)). This study presents an alternative weight definition (\(weight=\frac{wp}{1+{\left|cd\right|}^{n}}\)) and a methodology in which the weight formula is fitted from the position and the training data. On the dataset examined, the proposed methodology achieves results that are 9% better than KNN and 8% better than WKNN.
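The weighted-vote idea can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses the classic inverse-squared-distance weight and, for comparison, a weight of the same shape as \(\frac{wp}{1+|cd|^{n}}\) with assumed values \(wp=1\), \(n=2\), and \(cd\) taken to be the raw distance \(d\) (in the study these quantities are derived from the position and the training data).

```python
import numpy as np

def wknn_predict(X_train, y_train, x, k=3, weight_fn=None):
    """Classify point x by a weighted vote among its k nearest neighbors.

    weight_fn maps a distance to a vote weight; by default it is the
    classic inverse-squared-distance weight 1 / d^2.
    """
    if weight_fn is None:
        # Small epsilon avoids division by zero when x coincides with a sample.
        weight_fn = lambda d: 1.0 / (d ** 2 + 1e-12)
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of k closest samples
    votes = {}
    for i in nearest:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + weight_fn(dists[i])
    return max(votes, key=votes.get)              # class with largest total weight

# Alternative weight in the spirit of wp / (1 + |cd|^n), with the
# assumed (hypothetical) choices wp = 1, n = 2, cd = d.
alt_weight = lambda d: 1.0 / (1.0 + abs(d) ** 2)

X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
query = np.array([0.2, 0.2])
print(wknn_predict(X, y, query, k=3))                        # → 0
print(wknn_predict(X, y, query, k=3, weight_fn=alt_weight))  # → 0
```

Both weightings agree on this toy query because two of its three nearest neighbors belong to class 0; the formulas differ mainly in how sharply the vote weight decays with distance.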