A stratified approach to simulation modeling of software-defined networks is proposed. Simulation models of the network and of its active and passive components (controller, switch, host, and communication channels) are presented. The suitability of the approach for its intended use is confirmed by comparing the obtained simulation results with the results of network emulation in the Mininet environment. Keywords: software-defined network, simulation modeling, discrete-event system specification, big data. UDC 004.75 : 004.94
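The discrete-event system specification named in the keywords can be illustrated with a minimal event-queue sketch. This is not the authors' model: the component names, the 0.5 s channel delay, and the host/switch interaction below are purely illustrative assumptions.

```python
import heapq

class Simulator:
    """Minimal discrete-event simulation core: events are (time, seq, action)."""
    def __init__(self):
        self.clock = 0.0
        self.queue = []
        self.seq = 0  # tie-breaker so equal-time events keep insertion order

    def schedule(self, delay, action):
        heapq.heappush(self.queue, (self.clock + delay, self.seq, action))
        self.seq += 1

    def run(self):
        while self.queue:
            self.clock, _, action = heapq.heappop(self.queue)
            action()

# Illustrative scenario: a host sends a packet through a channel to a switch.
log = []
sim = Simulator()

def packet_arrives_at_switch():
    log.append(("switch_rx", sim.clock))

def host_sends():
    log.append(("host_tx", sim.clock))
    sim.schedule(0.5, packet_arrives_at_switch)  # 0.5 s = assumed channel delay

sim.schedule(1.0, host_sends)
sim.run()
# log == [("host_tx", 1.0), ("switch_rx", 1.5)]
```

A full SDN simulator would stratify such components (controller, switch, host, channel) into coupled models exchanging events through ports, but the event queue above is the common core.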
Modern approaches to distributed software systems engineering are tightly coupled with the use of formal methods. Effective application of a particular method can yield a significant outcome, for instance in terms of reduced time costs. To this end, the TLC model checker has been considered with respect to TLA+ specifications with a concurrent structure. The concurrency itself has been implemented as interleaving. Two different approaches to TLC model checking have been used: the first is based on model checking via breadth-first state space search (BFS), the second via depth-first search (DFS). The main result of the paper is a new approach to increasing the effectiveness of TLC verification with respect to the concurrent structure of a TLA+ specification. To analytically represent the synthesized TLA+ specifications with concurrent structure, the Kripke structure has been used. To assess the extent of the state space explosion problem occurring during the experiments, appropriate estimations have been proposed. These estimations have been validated in a case study, for which a composite web service usage scenario was considered. The results obtained during the experiments can be used to increase the effectiveness of automated TLC verification with respect to the concurrent structure of a TLA+ specification.
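The BFS exploration that an explicit-state checker such as TLC performs can be sketched under simplifying assumptions. The two-counter system below is a made-up example of interleaving (it is not the paper's case study), chosen only to show how interleaved transitions multiply the reachable state space.

```python
from collections import deque

def bfs_check(init_states, next_states, invariant):
    """Explicit-state BFS: visit every reachable state, checking an invariant.
    Returns (ok, states_visited). A real checker such as TLC also records
    a counterexample trace when the invariant fails."""
    seen = set(init_states)
    frontier = deque(seen)
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return False, len(seen)
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, len(seen)

# Two interleaved counters, each incremented independently up to 2:
# the interleaving of two 3-valued variables yields 3 * 3 = 9 states.
def succ(state):
    x, y = state
    if x < 2:
        yield (x + 1, y)
    if y < 2:
        yield (x, y + 1)

ok, n = bfs_check({(0, 0)}, succ, lambda s: s[0] + s[1] <= 4)
# ok == True, n == 9
```

DFS differs only in popping from the same end of the worklist; BFS finds shortest counterexample traces, which is why it is TLC's default mode.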
Research and development advancements in the area of vehicle door security using a smart tag and fingerprint system are presented. Fingerprint biometrics is one of the most popular, ubiquitous, reliable, economical and efficient biometric technologies, widely applicable due to its versatility and popular because of its universality, uniqueness, permanence, acceptability and performance [1]. An Arduino serves as the controller between the RFID sensor, fingerprint sensor, buzzer, LCD, LED and relay. The research is implemented for security purposes, to protect vehicles from theft or burglary, and is intended for the main door of a vehicle. The system starts working when the user accesses either of the two subsystems, fingerprint or smart tag, to lock and unlock the door. The fingerprint subsystem can be accessed only by the user, whereas the smart tag can be used by the user or by a close relative who borrows the vehicle in an emergency. The vehicle door cannot be opened when an unmatched fingerprint or an incorrect smart tag is presented. Once an incorrect smart tag is presented by an unauthorized person, the buzzer is activated and produces a loud alarm to alert the user. The Arduino Uno microcontroller controls the entire system. Hence, the system is easy to implement and use because of its simple design, and it can be enhanced with modern technology and applied to other vehicle components to secure the vehicle.
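The lock/alarm decision rules described above can be sketched as a small policy function. This is a hypothetical illustration, not the authors' Arduino firmware; the function name, tag identifiers and return values are all assumptions.

```python
def door_controller(fingerprint_match, tag_id, authorized_tags):
    """Illustrative lock/alarm decision logic (hypothetical, not the authors'
    firmware): unlock on a matched fingerprint or an authorized smart tag,
    sound the alarm on an unrecognized tag, otherwise stay locked."""
    if fingerprint_match:
        return "unlock"          # only the owner's fingerprint can match
    if tag_id is not None:
        # the smart tag may also be carried by a relative borrowing the car
        return "unlock" if tag_id in authorized_tags else "alarm"
    return "locked"

authorized = {"TAG-01", "TAG-02"}
owner = door_controller(True, None, authorized)          # matched fingerprint
relative = door_controller(False, "TAG-02", authorized)  # borrowed smart tag
intruder = door_controller(False, "TAG-99", authorized)  # unknown tag -> alarm
```

On the real device this decision would run in the Arduino loop, driving the relay on "unlock" and the buzzer on "alarm".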
Context. The problem of QoS-based Web service selection from a list of Web services with equal or similar functionality was considered. This task is an essential part of the processes of finding, discovering, matching and using Web services on the Internet, due to the numerous offerings of Web services with equal or similar functionality. A reasonable selection of a suitable Web service takes into account many of the user's quality requirements, such as response time, throughput, reliability, cost, etc. Such a task is usually formulated as an MCDM (multi-criteria decision-making) problem, in which the parameters are the Web service quality factors and the importance degrees of these factors. The object of this research is the process of selecting Web services using MCDM methods, taking into account the user's preferences and requirements regarding Web service quality characteristics. The subject of the research is the LSP method, which, in addition to the degrees of importance of the criteria used in all MCDM methods, simulates the user's reasoning about quality by taking into account, in particular, such characteristics of the criteria as mandatoriness, sufficiency, desirability, simultaneity and substitutability. Objective. The objective of the work is to develop an approach for comparing the result of using the LSP method with the results of using other MCDM methods. Method. A method for calculating the weights of input criteria, which are not always explicitly specified in the LSP method, was proposed. For this, conjunctive coefficients of impact are used, which are calculated as a result of a sensitivity analysis of the Web service's generalized quality criterion to changes in the partial quality criteria. This method underlies the proposed approach to comparing the efficiency of the LSP method with other MCDM methods, which consists in using the obtained weights as the weights of the input criteria for the MCDM methods. Results. The developed method and approach were verified experimentally.
The Web service ranking produced by the LSP method was compared with the rankings produced by the SAW, AHP, TOPSIS and VIKOR methods. This comparison confirmed the efficiency of the proposed method and approach. Conclusions. From the obtained results of comparing the LSP method with the MCDM methods considered in this study, it follows that the proposed method and approach provide input conditions for these methods equivalent to those of the LSP method, which is a necessary condition for a correct comparison of MCDM methods. The use of the proposed approach made it possible to study the sensitivity of the considered MCDM methods. In practical applications, this approach can be used to select a suitable MCDM method. The proposed method can be useful for creating professional evaluation systems in which it is necessary to assess the importance (weights) of tens or hundreds of quality criteria.
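One of the baseline methods compared above, SAW (Simple Additive Weighting), is straightforward to sketch. The three candidate services, their response-time and throughput figures, and the equal weights below are invented for illustration; in the paper's approach the weights would instead come from the LSP sensitivity analysis.

```python
def saw_rank(alternatives, weights, benefit):
    """Simple Additive Weighting (SAW): normalize each criterion column,
    then score each alternative by the weighted sum of its normalized values.
    benefit[j] is True for criteria to maximize (e.g. throughput) and
    False for criteria to minimize (e.g. response time, cost)."""
    cols = list(zip(*alternatives))
    norm = []
    for j, col in enumerate(cols):
        if benefit[j]:
            hi = max(col)
            norm.append([v / hi for v in col])   # larger is better
        else:
            lo = min(col)
            norm.append([lo / v for v in col])   # smaller is better
    return [sum(w * norm[j][i] for j, w in enumerate(weights))
            for i in range(len(alternatives))]

# Three hypothetical Web services: (response time in ms, throughput in req/s)
services = [(120.0, 300.0), (80.0, 250.0), (200.0, 400.0)]
scores = saw_rank(services, weights=[0.5, 0.5], benefit=[False, True])
best = scores.index(max(scores))  # the service with the fastest response wins here
```

Feeding all compared methods (SAW, AHP, TOPSIS, VIKOR) the same weight vector is exactly the "equivalent input conditions" requirement the conclusions refer to.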
Context. The task of production rule extraction while processing big arrays of data is discussed, and the problem of estimating the computer system resources used while extracting production rules based on parallel computations is solved. The research object is the process of production rule extraction. The research subject is methods of resource planning for parallel computer systems. Objective. The purpose of the work is the construction of a model for estimating the parallel computer system resources used to solve applied problems with the parallel method of production rule extraction. Method. The article deals with building a model for estimating the resources used by a parallel computer system while extracting production rules. A model for estimating the computer system resources used while executing the parallel method of production rule extraction is proposed. The synthesized model takes into account the type of computer system, the number of processors involved in solving the task, and the bandwidth of the data transfer network. In addition, the model considers the parameters of the mathematical apparatus used (the portions of parallel system nodes involved in production rule extraction based on decision trees, association rules and negative selection). The parameters of the applied task being solved are also taken into account: the number of observations and the number of characteristics in a given data set describing the results of observations of the object or process being studied. The synthesized neural model is polyalgorithmic. It allows estimating two characteristics of a parallel computer system executing the parallel method of production rule extraction: the time used and the volume of memory used. Results. Software which implements the proposed model and allows predicting the time and the volume of memory used by a parallel computer system while solving practical tasks has been developed. Conclusions.
The conducted experiments have confirmed the operability of the proposed software and allow recommending it for practical use in solving big data processing problems. Prospects for further research include the creation of parallel methods for feature selection, as well as an experimental study of the proposed model on more complex practical problems of different nature and dimensionality.
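The kind of estimate such a model produces can be illustrated with a deliberately simplified analytic stand-in. The paper's actual model is a trained neural model; the formula, the per-item cost constant and the bandwidth figure below are assumptions made only to show how the inputs (observations, characteristics, processors, network bandwidth) combine into a time prediction.

```python
def estimate_time(n_obs, n_features, n_procs, bandwidth_bps, cost_per_item=1e-6):
    """Simplified analytic stand-in (NOT the paper's neural model):
    computation time is the total work divided across processors, plus the
    one-time cost of distributing the data set over the interconnect."""
    items = n_obs * n_features                    # cells in the data set
    compute = items * cost_per_item / n_procs     # ideal parallel compute, s
    comm = items * 8 / bandwidth_bps              # ship 8-byte values once, s
    return compute + comm

# 1M observations with 20 characteristics, over a 1 Gbit-class (1e9 B/s) link:
t4 = estimate_time(1_000_000, 20, n_procs=4, bandwidth_bps=1e9)
t16 = estimate_time(1_000_000, 20, n_procs=16, bandwidth_bps=1e9)
# adding processors shrinks the compute term but not the communication term
```

Even this toy version reproduces the qualitative behavior a planner needs: speedup saturates once the fixed communication cost dominates, which is why the synthesized model takes network bandwidth into account alongside processor count.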