Summary
The inferences computed by a non‐singleton fuzzy system pose a processing challenge when the number of linguistic variables and terms in a rule set is large, since they involve many sequential vector operations. To alleviate part of the inference process, this paper presents a general‐purpose CUDA tool for building non‐singleton fuzzy systems. We introduce an inference machine architecture that processes string‐based rules and executes them concurrently, following an execution plan created by a fuzzy rule scheduler. The scheduler breaks down each n‐ary fuzzy operation into several binary fuzzy operations that can be executed across several streams and stages. As a result, this approach lets a system exploit the parallel nature of rule sets and achieve competitive speed‐up ratios without losing generality, even with large numbers of linguistic variables, linguistic terms, and rules. The object‐oriented nature of the proposed tool makes it easy to build fuzzy systems without deep knowledge of the underlying architecture, as shown in two case studies: testing the fuzzy system's operational limits and detecting edges in digital images.
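The decomposition described above can be sketched in a few lines. This is a hypothetical plain-Python/NumPy illustration, not the paper's CUDA implementation: an n-ary fuzzy t-norm (here, the min operator) is evaluated as a tree of binary operations, where each tree level corresponds to one stage of work that could be spread across streams.

```python
import numpy as np

def binary_min_reduction(memberships):
    """Reduce n membership vectors pairwise; each loop pass is one 'stage',
    and the binary ops inside a pass are independent (stream-parallel)."""
    layer = list(memberships)
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            nxt.append(np.minimum(layer[i], layer[i + 1]))  # binary fuzzy AND
        if len(layer) % 2:            # odd vector carries over to next stage
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

# Four antecedent membership vectors (one per linguistic variable)
m = [np.array([0.2, 0.9]), np.array([0.5, 0.4]),
     np.array([0.7, 0.8]), np.array([0.3, 0.6])]
result = binary_min_reduction(m)      # equals np.minimum.reduce(m)
```

The pairwise tree yields the same result as a left-to-right sequential reduction, which is what makes the scheduling transformation safe for associative, commutative fuzzy operators.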
The main drawback of conventional tools for digital image processing is the long processing time caused by the high complexity of their algorithms. The problem worsens when these algorithms must be applied sequentially to large image sets. To alleviate this situation, this paper introduces a general-purpose tool for massively processing large digital image sets with Apache Spark. The proposed tool lets users extract image rasters and store them in either of Spark's basic distributed data representations, namely Resilient Distributed Datasets (RDD) and DataFrames (DF), so that all subsequent image operations can be treated as RDD/DF transformations. Our experiments reveal that our proposal can schedule and execute distributed image processing tasks in less time than another Spark-based massive image processing tool. In these experiments, we applied several algorithms to 25,000 images (the MIRFLICKR-25000 set), reaching a maximum speedup of 54x. In addition, we found that the number of images also influences the speedup once the cluster memory is fully occupied. We can therefore claim that, using our proposal, more complex image processing workflows can be built and applied massively to large image sets while achieving competitive speedups.
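The core idea of treating image operations as RDD/DF transformations can be illustrated without a cluster. The sketch below is a plain-Python analogue (not the paper's tool, and not actual PySpark): lazy generators stand in for RDD transformations, so no image is touched until a final "action" materializes the results, mirroring Spark's deferred-execution model.

```python
import numpy as np

def rdd_map(f, partition):
    # Lazy, like RDD.map: builds the pipeline without executing it
    return (f(x) for x in partition)

# "Rasters": tiny synthetic grayscale images stored as uint8 arrays
rasters = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 20, 30)]

pipeline = rdd_map(lambda img: img.astype(np.float32) / 255.0, iter(rasters))
pipeline = rdd_map(lambda img: float(img.mean()), pipeline)  # chained transform

means = list(pipeline)   # the "action" that finally triggers execution
```

In real PySpark the same shape appears as `sc.parallelize(rasters).map(normalize).map(mean).collect()`, with partitions processed on different executors instead of a single generator chain.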
Given the high algorithmic complexity of Fast Fourier Transforms (FFTs) applied to images, using computational resources efficiently has been a challenge in several engineering fields. Accelerator devices such as Graphics Processing Units are very attractive solutions that greatly improve processing times. However, when the number of images to be processed is large, their limited memory becomes a serious problem. This can be addressed by adding accelerators or using higher-capacity ones, which implies higher costs. In hardware approaches, the separability property is frequently used to divide the two-dimensional FFT into several one-dimensional FFTs that can be processed simultaneously by several computing units. A feasible alternative, then, is distributed computing on an Apache Spark cluster. However, when migrating from hardware implementations, the feasibility of exploiting the separability property in distributed systems is not evident. For this reason, this paper presents a comparative study of distributed versions of the two-dimensional FFT based on the separability property, to determine a suitable way to process large image sets using both the Spark RDD and DataFrame APIs.
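The separability property itself is easy to verify numerically: a 2-D FFT equals a batch of 1-D FFTs over the rows followed by 1-D FFTs over the columns, which is exactly what allows the work to be split across independent computing units (or Spark partitions). A minimal NumPy check, independent of the paper's distributed implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))              # toy grayscale raster

rows = np.fft.fft(img, axis=1)        # one 1-D FFT per row (parallelizable)
full = np.fft.fft(rows, axis=0)       # one 1-D FFT per resulting column

# Row-then-column 1-D FFTs reproduce the direct 2-D FFT
assert np.allclose(full, np.fft.fft2(img))
```

The order of the two passes does not matter, so a distributed version is free to shuffle row results between the passes however the cluster topology makes convenient.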
In this paper, the optimal position control of an underactuated robotic finger is presented. Two trajectories, one for the proximal and the other for the medial phalanx, are proposed to emulate the finger's flexion/extension movements. A Mamdani fuzzy controller is proposed due to the lack of a precise dynamical model of the system. To obtain the control parameters, an optimization strategy over the membership functions is applied. Genetic algorithms (GA) are commonly used for optimization in diverse applications; in this case, however, an auto-adaptive differential evolution method is used to obtain superior convergence behavior. Simulations of the virtual prototype are carried out in MATLAB/Simulink to display the trajectory tracking. The results show that the maximum error between the proposed and obtained trajectories is 3.1352 × 10⁻⁴ rad.
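To make the optimization step concrete, the sketch below shows a minimal classic DE/rand/1/bin loop in NumPy, not the paper's auto-adaptive variant. The decision variable stands in for a single membership-function parameter (a triangle's center), and the quadratic cost is a hypothetical stand-in for the simulated tracking error; the target value 0.6 is invented for illustration.

```python
import numpy as np

def cost(c, target=0.6):
    # Toy surrogate for the tracking error as a function of the MF center
    return (c - target) ** 2

rng = np.random.default_rng(1)
pop = rng.uniform(0.0, 1.0, size=20)  # population of candidate centers
F, CR = 0.8, 0.9                      # classic DE mutation/crossover rates

for _ in range(50):
    for i in range(len(pop)):
        a, b, c_ = pop[rng.choice(len(pop), 3, replace=False)]
        trial = a + F * (b - c_) if rng.random() < CR else pop[i]
        if cost(trial) < cost(pop[i]):   # greedy selection: never worsens
            pop[i] = trial

best = pop[np.argmin([cost(x) for x in pop])]
```

An auto-adaptive variant would additionally evolve `F` and `CR` alongside each individual, which is what the paper credits for the improved convergence over a GA.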