Periodic boundary conditions arise naturally in many scientific problems and often give rise to particular symmetries, so datasets that exhibit periodicity require special treatment during analysis. Periodic boundary conditions often allow a problem to be solved or described much more simply; angular rotational symmetry is one example, and it implies conservation of angular momentum. Clustering, in turn, is one of the first and most basic methods used in data analysis, and is often the starting point when new data are acquired and explored. K-means is one of the most commonly used clustering methods and gives reasonably good results in many situations. Unfortunately, the original k-means algorithm does not cope well with periodic data: for example, it treats an angle of zero degrees as very far from an angle of 359 degrees. Periodic boundary conditions change the classical distance measure and thereby introduce an error into k-means clustering. In this paper, we discuss the problem of periodicity in datasets and present a periodic k-means algorithm that modifies the original approach. Since many data scientists prefer off-the-shelf solutions, such as libraries available in Python, we show how easily periodicity can be incorporated into the existing k-means implementation in the PyClustering library, allowing anyone to integrate periodic conditions at little additional cost. The paper evaluates the described method on three datasets: an artificial dataset, wind-direction measurements, and the New York taxi service dataset. The proposed periodic k-means gives better results when the dataset exhibits periodic properties.
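The two changes the abstract describes, a distance measure that wraps around the period and a centroid update that respects it, can be sketched as follows. This is a minimal illustration for a one-dimensional angular feature, not the paper's exact code or the PyClustering API; the names `periodic_distance`, `circular_mean`, and `periodic_kmeans_1d` are ours.

```python
import numpy as np

def periodic_distance(a, b, period=360.0):
    """Shortest separation between two angles on a circle of the given period."""
    d = np.abs(a - b) % period
    return np.minimum(d, period - d)

def circular_mean(angles, period=360.0):
    """Mean angle computed on the unit circle, so 350 and 10 average to 0, not 180."""
    r = 2.0 * np.pi * np.asarray(angles, dtype=float) / period
    mean = np.arctan2(np.sin(r).mean(), np.cos(r).mean())
    return (mean * period / (2.0 * np.pi)) % period

def periodic_kmeans_1d(data, k, period=360.0, n_iter=100, init=None, seed=0):
    """1-D k-means for periodic data: periodic metric plus circular-mean centroids."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    centers = (np.asarray(init, dtype=float) if init is not None
               else rng.choice(data, size=k, replace=False))
    for _ in range(n_iter):
        # Assign each point to its nearest center under the periodic metric.
        labels = np.argmin(
            periodic_distance(data[:, None], centers[None, :], period), axis=1)
        # Recompute each center as the circular mean of its members;
        # keep the old center if a cluster went empty.
        new_centers = np.array([
            circular_mean(data[labels == j], period) if np.any(labels == j)
            else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```

With this metric, the 0-degree/359-degree pair from the abstract is one degree apart, and a cluster straddling the wrap point (e.g. 350 through 10 degrees) gets a centroid near zero rather than near 180.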
In this paper, a method for calculating the importance factor of continuous features from a given set of patterns is presented. A real problem in many practical cases, such as medical data, is finding which parts of the patterns are crucial for correct classification. This creates a need to preprocess all the data, which affects both the running time and the accuracy of the applied methods (when unimportant data hide the important data). Some existing methods allow the selection of important features for binary and sometimes discrete data, or for continuous data after some preprocessing. Very often, however, such conversion carries the risk of losing important information, because the consequences of a given discretization cannot be known in advance. The proposed method avoids this problem because it operates on the original, non-transformed continuous data. Two factors, concentration and diversity, are defined and used to calculate the importance factor for each feature and pattern. Based on these factors, unimportant features can be identified to reduce the dimension of the input data, or "bad" patterns can be detected to improve classification. An example of how the proposed method can be used to improve a decision tree is given as well.
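The abstract does not give formulas for the concentration and diversity factors, but the intuition, values concentrating within a class while classes stay diverse from one another, can be illustrated with a Fisher-style score on raw continuous data. This is only a hypothetical stand-in for the paper's factors; `feature_importance` is our name, not the authors'.

```python
import numpy as np

def feature_importance(X, y):
    """Illustrative per-feature importance on raw continuous data.

    A feature scores high when its values concentrate within each class
    (low within-class spread) while the classes remain diverse from each
    other (high between-class spread). This is a Fisher-style proxy, not
    the paper's exact concentration/diversity definitions.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes = np.unique(y)
    overall = X.mean(axis=0)
    # Between-class spread: how far each class mean sits from the overall mean.
    between = sum((X[y == c].mean(axis=0) - overall) ** 2 for c in classes)
    # Within-class spread: how much values scatter inside each class.
    within = sum(X[y == c].var(axis=0) for c in classes)
    return between / (within + 1e-12)  # larger = more important
```

Such a score operates on the untransformed values directly, matching the abstract's point that no lossy discretization step is needed before ranking features.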
The classification of multi-dimensional patterns is one of the most popular and often most challenging problems in machine learning, which is why new approaches are continually being tried in the hope of improving on existing ones. This article proposes a new technique based on a decision network called a self-optimizing neural network (SONN). The proposed approach works on discretized data: using a special procedure, we assign a feature vector to each element of the real-valued dataset. The feature vectors are then analyzed, and decision patterns are created using so-called discriminants. We focus on how these discriminants are used and how they influence the final classifier's prediction, and we also discuss the influence of the neighborhood topology. In the article, we use three datasets with different properties. All results obtained by the derived methods are compared with those obtained with the well-known support vector machine (SVM) approach. The results show that the proposed solutions give better results than SVM: the information obtained from the training set is generalized better, and the final accuracy of the classifier is higher.
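The abstract's "special procedure" for turning real values into feature vectors is not specified; the general shape of such a step, mapping each real-valued element to a binary vector, can be sketched with simple equal-width binning. This is an illustrative stand-in only, not the SONN procedure; `discretize_one_hot` is our name.

```python
import numpy as np

def discretize_one_hot(x, n_bins=4):
    """Map each real value to a binary (one-hot) feature vector.

    Equal-width binning over the observed range; the paper's actual
    assignment procedure is not described in the abstract, so this is
    only a generic example of real-to-binary discretization.
    """
    x = np.asarray(x, dtype=float)
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    # Interior edges only, so the min and max fall into the outer bins.
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    return np.eye(n_bins, dtype=int)[idx]
```

Each input element becomes a fixed-length binary vector with a single active entry, the kind of discrete representation the decision patterns and discriminants can then be built from.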