Generalized Regression Neural Network (GRNN) is a radial basis function based, supervised-learning Artificial Neural Network (ANN) commonly used for data prediction. In addition to its easily modelled structure, its speed and accurate results are its other strong features. On the other hand, a GRNN employs one neuron in the pattern layer for each sample in the training dataset. Therefore, for huge datasets the pattern layer size grows in proportion to the number of training samples, and the memory requirement and computational time increase excessively.
In this study, to reduce the space and time complexity of the GRNN, the k-means clustering algorithm, which has previously been used as a pre-processor in the literature, is utilized; unlike previous studies, the emergence of outlier data, which negatively affected their performance, is prevented by identifying test data that fall between clusters. Hence, while the memory requirement of the pattern layer and the number of calculations are reduced, the negative effect on performance introduced by the clustering algorithm is largely removed, and prediction results nearly identical to those of the standard GRNN are achieved using roughly 90% fewer training samples.
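The two ingredients of the abstract above can be sketched compactly: the standard GRNN estimate (a Gaussian-kernel weighted average of training targets, one pattern-layer neuron per sample) and a k-means step that shrinks the pattern layer by replacing each cluster with its centroid. The toy data, the cluster count, and the centroid-plus-mean-target reduction below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """Standard GRNN estimate: Gaussian-kernel weighted average of
    the training targets (one pattern-layer neuron per sample)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.dot(w, y_train) / np.sum(w)

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means, kept self-contained for the sketch."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Toy regression data: y is the sum of the two features.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = X.sum(axis=1)

# Pattern-layer reduction sketch: one neuron per cluster centroid,
# with the mean target of the cluster's members (illustrative only).
centers, labels = kmeans(X, k=20)
y_centers = np.array([y[labels == j].mean() for j in range(20)])

x_test = np.array([0.4, 0.6])
full = grnn_predict(X, y, x_test)          # 200 pattern neurons
reduced = grnn_predict(centers, y_centers, x_test)  # 20 pattern neurons
```

The reduced network uses a tenth of the pattern neurons while producing a prediction close to the full GRNN's on this smooth toy function.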
In a general regression neural network (GRNN), the number of neurons in the pattern layer is proportional to the number of training samples in the dataset. The use of a GRNN in applications that have relatively large datasets becomes troublesome due to the architecture and speed required. The great number of neurons in the pattern layer requires a substantial increase in memory usage and causes a substantial decrease in calculation speed. Therefore, there is a strong need for pattern layer size reduction. In this study, a self-organizing map (SOM) structure is introduced as a pre-processor for the GRNN. First, an SOM is generated for the training dataset. Second, each training record is labelled with the most similar map unit. Lastly, when a new test record is applied to the network, the most similar map units are detected, and the training data that have the same labels as the detected units are fed into the network instead of the entire training dataset. This scheme enables a considerable reduction in the pattern layer size. The proposed hybrid model was evaluated by using fifteen benchmark test functions and eight different UCI datasets. According to the simulation results, the proposed model significantly simplifies the GRNN’s structure without any performance loss.
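The three-step scheme above (generate an SOM, label each training record with its best-matching unit, then select only same-labelled records at test time) can be sketched as follows. The SOM training itself is omitted: the unit weight vectors are random stand-ins for a trained map, and names such as `bmu` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(500, 3))

# Stand-in for the weight vectors of a trained 4x4 SOM (16 units).
units = rng.uniform(0, 1, size=(16, 3))

def bmu(units, x):
    """Index of the best-matching unit: the map unit closest to x."""
    return int(np.argmin(((units - x) ** 2).sum(axis=1)))

# Step 2: label every training record with its best-matching unit.
labels = np.array([bmu(units, x) for x in X_train])

# Step 3: for a new test record, feed only the training data that
# share the label of its best-matching unit into the GRNN, instead
# of the entire training dataset.
x_test = rng.uniform(0, 1, size=3)
selected = X_train[labels == bmu(units, x_test)]
```

Only the `selected` subset reaches the pattern layer, which is what shrinks the GRNN's structure.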
In recent years, laser scanning platforms have proven to be a helpful tool for analysing plant traits in agricultural applications. Three-dimensional high-throughput plant scanning platforms provide an opportunity to measure phenotypic traits that can be highly useful to plant breeders. However, the measurement of phenotypic traits is still carried out through labor-intensive manual observation. Thanks to computer vision techniques, these observations can be supported with effective and efficient plant phenotyping solutions. However, since the leaves and branches of some plant types overlap with those of nearby plants after a certain period of time, it becomes challenging to obtain the phenotypic properties of a single plant. In this study, the aim is to separate bean plants from each other using common clustering algorithms and make them suitable for trait extraction. K-means, hierarchical, and Gaussian mixture clustering algorithms were applied to segment overlapping beans. The experimental results show that K-means clustering is more robust and faster than the others.
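The segmentation idea can be illustrated on a synthetic 3D point cloud: two overlapping "plants" are modelled as Gaussian blobs around two stem positions, and k-means assigns each scanned point to one plant. The blob positions, spreads, and cluster count are assumptions made for this sketch, not values from the study.

```python
import numpy as np

# Synthetic point cloud: two neighbouring plants whose canopies
# overlap (illustrative stand-in for a laser scan).
rng = np.random.default_rng(2)
plant_a = rng.normal(loc=[0.0, 0.0, 0.5], scale=0.2, size=(300, 3))
plant_b = rng.normal(loc=[1.0, 0.0, 0.5], scale=0.2, size=(300, 3))
cloud = np.vstack([plant_a, plant_b])

def kmeans(X, k, iters=25, seed=0):
    """Minimal Lloyd's k-means over 3D points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# k = number of plants; points sharing a label form one plant's
# point cloud, ready for per-plant trait extraction.
labels, centers = kmeans(cloud, k=2)
```

The recovered cluster centers sit near the two stem positions, so each labelled subset can be processed as a single plant.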
Generalized Regression Neural Network (GRNN) is a radial basis function based neural network used for function approximation and prediction. Thanks to its easy modelling structure and one-pass learning, it has been utilized in many applications as an alternative to other prediction methods such as the multilayer perceptron (MLP) and support vector machines (SVM). Since the number of neurons in a GRNN's pattern layer is proportional to the number of training samples in the dataset, memory usage and computational time increase for huge datasets. Therefore, k-nearest neighbour (kNN) and clustering methods such as k-means and hierarchical clustering have frequently been used for pattern layer size reduction. Pattern layer size reduction may provide not only a simpler structure but also an increase in prediction accuracy. In this work, a pattern layer size reduction approach utilizing an Angle Based Nearest Neighbor (ABNN) algorithm is proposed for three-dimensional datasets. The proposed method divides the training space into specific angles and, for each test datum, searches for the nearest training datum within each angle. In the end, only a few training data, all similar to the test datum, remain to be used in the GRNN's pattern layer. The performance of the proposed method was evaluated on fifteen benchmark global optimization test functions and compared with that of the standard GRNN and a hybrid method using kNN as a pre-processor. Simulation results show that the proposed method provides a 99.33% reduction in pattern layer size while accuracy is also improved by up to 65.61%.

Keywords: Generalized regression neural network, prediction neural networks, nearest neighbor, pattern reduction, reduced dataset.
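One way to read "dividing the training space into specific angles" is to partition the plane around the test point into equal azimuth sectors and keep the nearest training point in each sector, so that the selected neighbors surround the test datum from all directions. The sketch below implements that reading for 2D inputs; the paper's exact construction for three-dimensional datasets may differ, and the sector count is an assumption.

```python
import numpy as np

def abnn_select(X_train, x_test, n_sectors=8):
    """Keep the nearest training point per angular sector around
    x_test (one possible reading of angle-based neighbor search)."""
    diff = X_train - x_test
    angles = np.arctan2(diff[:, 1], diff[:, 0])            # in (-pi, pi]
    sectors = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int)
    sectors = np.clip(sectors, 0, n_sectors - 1)           # fold angle == pi
    dists = np.hypot(diff[:, 0], diff[:, 1])
    keep = []
    for s in range(n_sectors):
        idx = np.where(sectors == s)[0]
        if idx.size:                                       # sector may be empty
            keep.append(idx[np.argmin(dists[idx])])
    return np.array(keep)

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(400, 2))
kept = abnn_select(X, np.array([0.0, 0.0]), n_sectors=8)
# At most n_sectors training points remain for the pattern layer,
# a reduction from 400 to <= 8 samples for this test datum.
```

Feeding only the `kept` rows into the GRNN's pattern layer gives the drastic size reduction the abstract describes, while the per-sector selection keeps neighbors on all sides of the test point.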
Deep learning algorithms have become popular methods for pattern recognition due to their advantages over traditional methods, such as providing deep representations of data and high-level semantic features. The deep convolutional neural network is one of the deep learning techniques used in computer vision. A deep convolutional neural network consists of alternating convolution and pooling layers, followed by feedforward layers. It does not have a fixed structure, so determining the optimal structure, such as the number of convolution and pooling layers and the kernel sizes of these layers, is crucial for fast and high-performance implementations. Hence, in this work different convolutional neural network structures were established and tested on recognition of 28x28 MNIST handwritten digits. According to the test results, kernels should cover at least 2 neighbor pixels of the current pixel on each side. Moreover, increasing the number of layers provides better results but at the same time forces a decrease in kernel size, which may lead to worse performance. Hence, when the number of layers is increased, the kernel size must be considered.
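The kernel-coverage finding is concrete: a kernel covering 2 neighbor pixels on each side of the current pixel is 5x5. A plain "valid" cross-correlation on a 28x28 input shows the mechanics; the toy image and averaging kernel below are illustrative, not the paper's trained filters.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain valid-mode cross-correlation: slide the kernel over the
    image and take an elementwise product-sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 28x28 "image" (MNIST-sized) and a 5x5 averaging kernel, i.e.
# a kernel covering 2 neighbor pixels on each side of the center.
image = np.arange(28 * 28, dtype=float).reshape(28, 28)
kernel = np.full((5, 5), 1.0 / 25.0)
feat = conv2d_valid(image, kernel)   # valid output shrinks to 24x24
```

Each convolution layer with a 5x5 valid kernel shrinks the map by 4 pixels per dimension, which is why stacking more layers forces smaller kernels on a 28x28 input, the trade-off the abstract warns about.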