The paper examines the efficiency of applying CUDA technology to the parallelization of a public-key cryptographic algorithm. The execution speed of several implementations of the algorithm is compared: a sequential implementation on the CPU and two parallel implementations, one on the CPU and one on the GPU. A description of the public-key algorithm is presented, along with the properties that allow it to be parallelized. The advantages and disadvantages of the parallel implementations are analyzed, and it is shown that each of them can be suitable for different scenarios. Software was developed and several numerical experiments were performed; the correctness of the obtained encryption and decryption results was confirmed. To eliminate the influence of external factors on the execution time, the algorithm was run ten times in a row and the average value was calculated. Speedup coefficients for the message encryption and decryption algorithms were estimated for the OpenMP- and CUDA-based implementations. The proposed approach focuses on the possibility of further optimization as the multi-core architectures of computer systems and graphics processors continue to develop.
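The property that makes such an algorithm parallelizable is that each message block is encrypted independently of the others. A minimal sketch of this idea, using textbook RSA parameters and Python threads in place of the paper's OpenMP/CUDA implementations (the paper's actual algorithm and kernels are not reproduced here):

```python
# Hypothetical sketch: block-level parallel public-key encryption.
# Small textbook RSA key (p = 61, q = 53); not secure, for illustration only.
from concurrent.futures import ThreadPoolExecutor

N, E, D = 3233, 17, 2753  # modulus, public exponent, private exponent

def encrypt_block(m: int) -> int:
    return pow(m, E, N)

def decrypt_block(c: int) -> int:
    return pow(c, D, N)

def encrypt(message: str, workers: int = 4) -> list:
    # Each block is independent, so the map parallelizes trivially --
    # the same property the paper exploits with OpenMP and CUDA.
    blocks = [ord(ch) for ch in message]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encrypt_block, blocks))

def decrypt(cipher: list, workers: int = 4) -> str:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return "".join(chr(m) for m in pool.map(decrypt_block, cipher))
```

On a GPU, the same per-block independence maps naturally onto one CUDA thread per block of the message.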
Machine learning methods in medicine are the subject of significant ongoing research, which mainly focuses on modeling certain human actions and thought processes and on disease recognition. Other applications include biomedical systems such as genetics and DNA analysis. The purpose of this paper is to implement two machine learning methods, Random Forest and Decision Tree, and then parallelize these algorithms in order to achieve higher classification accuracy and reduce classifier training time when processing medical data, specifically for detecting human cardiovascular disease. The paper thus investigates machine learning methods for data processing in medicine with the goal of improving accuracy and execution time through parallelization. Classification is an important tool in today's world, where big data is used to make decisions in government, economics, medicine, and so on: researchers have access to vast amounts of data, and classification helps them understand the data and find patterns in it. The paper uses a dataset consisting of records of 70,000 patients, each described by 12 attributes. Analysis and preliminary data preparation were performed. The Random Forest algorithm was parallelized using the functionality of the sklearn library; the model training time was reduced by a factor of 4.4 when using 8 parallel threads, compared with sequential training. The algorithm was also parallelized with CUDA, which reduced the training time by a factor of 83.4 on the GPU. The paper calculates speedup and efficiency coefficients and provides a detailed comparison with the sequential algorithm.
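In scikit-learn, the parallelization route the paper describes is exposed through the `n_jobs` parameter of `RandomForestClassifier`, which trains the ensemble's trees concurrently. A small sketch on synthetic data (the paper's actual 70,000-record cardiovascular dataset is not used here; the feature count of 12 mirrors the abstract):

```python
# Sketch: tree-level parallel Random Forest training via sklearn's n_jobs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the medical dataset: 12 attributes per record.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# n_jobs=8 mirrors the 8 parallel threads reported in the paper;
# n_jobs=1 would give the sequential baseline for the speedup comparison.
clf = RandomForestClassifier(n_estimators=100, n_jobs=8, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Timing `fit` with `n_jobs=1` versus `n_jobs=8` on a sufficiently large dataset reproduces the kind of training-time comparison reported in the abstract.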
The paper considers a method for analyzing a person's psychophysical state from psychomotor indicators using the finger tapping test. A mobile phone app that generalizes the classic tapping test was developed for the experiments. The tool allows collecting samples and analyzing them both as individual experiments and as a dataset as a whole. Using statistical methods and hyperparameter optimization, the data were examined for anomalies, and an algorithm for reducing their number was developed. A machine learning model is used to predict various features of the dataset. These experiments reveal the structure of the data obtained with the finger tapping test and indicate how future experiments should be conducted to improve the generalization of the model. The developed anomaly removal method can be used in further research to increase the accuracy of the model. The model itself is a multilayer recurrent neural network, which is well suited to time-series classification. The model's learning error is 1.5% on a synthetic dataset and 5% on real data from a similar distribution.
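One plausible form of the statistical anomaly-reduction step is outlier filtering on the inter-tap intervals. The sketch below is an assumption, not the paper's exact procedure: it uses a simple k-sigma rule, where the threshold k would be one of the hyperparameters to optimize.

```python
# Hypothetical sketch of anomaly reduction for tapping-test data:
# drop inter-tap intervals that lie far from the sample mean.
from statistics import mean, stdev

def remove_anomalies(intervals, k: float = 3.0):
    """Keep intervals within k standard deviations of the mean.

    intervals: list of inter-tap times in seconds.
    k: sigma threshold (assumed hyperparameter, default 3.0).
    """
    mu, sigma = mean(intervals), stdev(intervals)
    return [x for x in intervals if abs(x - mu) <= k * sigma]
```

Cleaning the series this way before training a recurrent model is a common way to keep rare sensor glitches from dominating the learned dynamics.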
The problem of determining the position of a lidar with optimal accuracy is relevant in various fields of application. It is an important task in robotics, widely used in route planning for vehicles, flight control systems, navigation systems, and machine learning, as well as in managing economic efficiency: studying land degradation processes, planning and controlling the stages of agricultural production, and land inventory for assessing the consequences of various environmental impacts. The paper provides a detailed analysis of the proposed parallel algorithm for determining the current position of the lidar. To optimize the computing process, accelerate it, and make real-time results possible, the OpenMP parallel computing technology is used; the computational complexity of the sequential variant is also significantly reduced. A number of numerical experiments were carried out on the multi-core architecture of modern computers. As a result, the computing process was accelerated by about a factor of eight, with an efficiency of 0.97. It is shown that the difference in execution time between the sequential and parallel algorithms grows as the number of lidar measurements and iterations increases, which is relevant when simulating various robotics problems. The obtained results can be substantially improved by selecting a computing system with more than eight cores. The main areas of application of the developed method are described, and its shortcomings and prospects for further research are discussed.
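The speedup and efficiency coefficients quoted above follow the standard definitions S = T_seq / T_par and E = S / p, where p is the number of cores. With the abstract's figures (8 cores, efficiency 0.97), the implied speedup is about 7.76:

```python
# Standard parallel performance metrics used to report the results above.
def speedup(t_seq: float, t_par: float) -> float:
    """Speedup S = T_seq / T_par."""
    return t_seq / t_par

def efficiency(t_seq: float, t_par: float, cores: int) -> float:
    """Efficiency E = S / p; E close to 1.0 means near-linear scaling."""
    return speedup(t_seq, t_par) / cores
```

An efficiency of 0.97 on 8 cores indicates the algorithm is close to linearly scalable, which is why the authors expect further gains on systems with more than eight cores.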
Mochurad L. I. – Assistant at the Department of Artificial Intelligence Systems, Lviv Polytechnic National University, Lviv, Ukraine. ABSTRACT. Relevance. The rapid development of nanotechnology imposes new requirements on electron-optics systems. Modern electron-optical systems contain a significant number of electrodes of complex configuration with inherent geometric symmetry, and calculating the electrostatic fields of such systems requires high computational accuracy. This can be ensured by developing new algorithms for calculating potential fields and improving existing ones. Objective. The aim of this work is to develop a model-reduction method for calculating the electrostatic fields of modern electron-optics systems. Method. To confirm the effectiveness of the proposed method, the parameters of the electrostatic field of a specific model system are determined. It is shown that the configuration of the electrode surfaces possesses an Abelian cyclic symmetry group of the fourth order. The Fourier transform matrix for this group is found. Using the model-reduction method based on the apparatus of group theory, the system of four integral equations is reduced to a sequence of four independent integral equations, where integration is performed over one quarter of the total boundary surface. Maximal (repeated) exploitation of the symmetry of the boundary surface in the mathematical modeling of the electrostatic field makes it possible, in turn, to reduce the order of the model substantially further, for example to integration over 1/16 or 1/64 of the boundary surface. Results. Without loss of generality, the parameters of the electrostatic field are calculated for a specific model system and visualized using equipotential surfaces. The numerical modeling results are presented for different variations of the known potential values on the electrode boundary surfaces. The obtained results can be used in the design of modern electron-optics systems. Conclusions.
A model-reduction method for calculating the electrostatic fields of electron-optical systems has been developed. It is based on the boundary integral equations of potential theory combined with the apparatus of group theory and, unlike existing methods, makes it possible to greatly simplify the cumbersome numerical analysis of electrostatic field parameters by fully exploiting the symmetry present in the geometry of the boundary surfaces, to avoid numerical instability of the computations, and to achieve higher calculation accuracy. The class of electron-optics systems that admit mathematical modeling based on the integral equation method has been extended. KEYWORDS: model system, integral equation method, Abelian symmetry group, circulant matrix, Fourier transform, equipotential surface.
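The algebra behind this reduction can be illustrated numerically (this is not the paper's code, and the matrix entries are arbitrary stand-ins): a system matrix that commutes with a cyclic group of order 4 is circulant, and the discrete Fourier transform diagonalizes any circulant matrix, turning one coupled four-block system into four independent equations.

```python
# Illustration of the group-theoretic reduction: the DFT diagonalizes a
# circulant matrix, decoupling a 4x4 system into 4 independent equations.
import numpy as np

c = np.array([4.0, 1.0, 0.5, 1.0])           # first row (arbitrary example)
C = np.array([np.roll(c, k) for k in range(4)])  # circulant system matrix

F = np.fft.fft(np.eye(4)) / 2.0              # unitary DFT matrix of order 4
D = F @ C @ F.conj().T                       # similarity transform: diagonal

eigenvalues = np.diag(D).real                # one scalar per decoupled equation
```

Each diagonal entry corresponds to one of the independent integral equations; in the paper, the same change of basis reduces the domain of integration to a symmetry cell of the boundary surface.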
Obstacle detection is crucial for the navigation of autonomous mobile robots: their presence must be established as accurately as possible, and their position relative to the robot must be found. Autonomous mobile robots for indoor navigation use several special sensors for various tasks; one such task is localizing the robot in space. In most cases, a LiDAR sensor is employed to solve this problem. The data from this sensor are also critical because they directly capture the distance to objects and obstacles surrounding the robot, so LiDAR data can be used for detection. This article is devoted to developing an obstacle detection algorithm based on 2D LiDAR sensor data. We propose a parallelization method to speed up this algorithm when processing big data. The result is an algorithm that finds obstacles and objects with high accuracy and speed: it receives a set of points from the sensor and data about the robot's movements, and it outputs a set of line segments, where each group of such segments describes an object. Accuracy was assessed with two proposed metrics, and both averages are high: 86% and 91% for the first and second metric, respectively. The proposed method is flexible enough to be optimized for a specific configuration of the LiDAR sensor. Four hyperparameters are found experimentally for a given sensor configuration to maximize the correspondence between real and detected objects. The proposed algorithm has been carefully tested on simulated and real data. The authors also investigated the relationship between the values of the selected hyperparameters and the algorithm's efficiency. Potential applications, limitations, and opportunities for future research are discussed.
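A simplified sketch of the points-to-segments idea (not the authors' exact algorithm): consecutive scan points are grouped into clusters wherever the gap between neighbours stays below a distance threshold, and each cluster is summarized by the segment between its endpoints. The `gap` and `min_points` values below play the role of two of the tunable hyperparameters the abstract mentions.

```python
# Hypothetical sketch: group an ordered 2D LiDAR scan into line segments.
from math import dist

def scan_to_segments(points, gap=0.3, min_points=2):
    """points: list of (x, y) in scan order -> list of ((x1, y1), (x2, y2)).

    gap: max distance between neighbouring points of one object (assumed).
    min_points: min cluster size to emit a segment (assumed).
    """
    segments, cluster = [], [points[0]]
    for p in points[1:]:
        if dist(cluster[-1], p) <= gap:
            cluster.append(p)          # same object: extend the cluster
        else:
            if len(cluster) >= min_points:
                segments.append((cluster[0], cluster[-1]))
            cluster = [p]              # distance jump: start a new object
    if len(cluster) >= min_points:
        segments.append((cluster[0], cluster[-1]))
    return segments
```

Because clusters are independent once the scan is split, the per-cluster fitting step is a natural target for the parallelization the article proposes.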