Photonic solutions are today a mature industrial reality for high-speed, high-throughput data communication and switching infrastructures. To what extent photonics will play a role in next-generation computing architectures is still a matter of investigation. In particular, spurred by the recent outstanding achievements of artificial neural networks, there is great interest in improving their speed and energy efficiency by exploiting photonic rather than electronic hardware. In this work we review the state of the art of photonic artificial neural networks. We propose a taxonomy of the existing solutions (categorized into multilayer perceptrons, convolutional neural networks, spiking neural networks, and reservoir computing), with emphasis on proof-of-concept implementations. We also survey the specific approaches developed for training photonic neural networks. Finally, we discuss the open challenges and highlight the most promising future research directions in this field.
INDEX TERMS: Artificial neural networks, neural network hardware, photonics, neuromorphic computing, photonic neural networks.
In recent years, the numerous successful applications of fuzzy rule-based systems (FRBSs) to several different domains have produced considerable interest in methods to generate FRBSs from data. Most of the methods proposed in the literature, however, focus on performance maximization and neglect FRBS comprehensibility. Only recently has the problem of finding the right trade-off between performance and comprehensibility, in keeping with the original nature of fuzzy logic, aroused growing interest in methods that take both aspects into account. In this paper, we propose a Pareto-based multi-objective evolutionary approach to generate a set of Mamdani fuzzy systems from numerical data. We adopt a variant of the well-known (2+2) Pareto Archived Evolution Strategy ((2+2)PAES), which uses one-point crossover and two appropriately defined mutation operators. (2+2)PAES determines an approximation of the optimal Pareto front by concurrently minimizing the root mean squared error and the complexity. Complexity is measured as the sum of the conditions composing the antecedents of the rules included in the FRBS. Thus, low complexity values correspond to Mamdani fuzzy systems characterized by a small number of rules and a small number of input variables actually used in each rule, which ensures high comprehensibility. We tested our version of (2+2)PAES on three well-known regression benchmarks, namely the Box–Jenkins gas furnace, the Mackey–Glass chaotic time series, and the Lorenz attractor time series datasets. To show the good characteristics of our approach, we compare the Pareto fronts produced by (2+2)PAES with those obtained by applying a heuristic approach based on SVD-QR decomposition and four different multi-objective evolutionary algorithms.
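The two objectives driving the search can be made concrete with a small sketch. The function and variable names below are illustrative assumptions, not the paper's code: a rule base is modeled as a list of rules, each holding a list of antecedent conditions; complexity is the total condition count, and a PAES-style archive keeps only mutually non-dominated (RMSE, complexity) pairs.

```python
def complexity(rule_base):
    """Sum of antecedent conditions over all rules in the FRBS."""
    return sum(len(antecedents) for antecedents, _consequent in rule_base)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def archive_insert(archive, candidate):
    """PAES-style archive update: keep only mutually non-dominated points."""
    if any(dominates(p, candidate) for p in archive):
        return archive  # candidate is dominated: discard it
    return [p for p in archive if not dominates(candidate, p)] + [candidate]
```

For example, a rule base with one two-condition rule and one single-condition rule has complexity 3, regardless of the total number of input variables available.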
Numerous problems arising in engineering applications involve several objectives to be satisfied. An important class of problems of this kind is lexicographic multi-objective problems, where the first objective is incomparably more important than the second, which in turn is incomparably more important than the third, and so on. In this paper, Lexicographic Multi-Objective Linear Programming (LMOLP) problems are considered. To tackle them, traditional approaches either require the solution of a series of linear programming problems or apply a scalarization of the weighted objectives into a single-objective function. The latter approach requires finding a set of weights that guarantees the equivalence of the original problem and the single-objective one, and the search for correct weights can be very time consuming. In this work, a new approach for solving LMOLP problems is proposed, based on a recently introduced computational methodology that allows one to work numerically with infinities and infinitesimals. It is shown that a smart application of infinitesimal weights allows one to construct a single-objective problem, avoiding the necessity of determining finite weights. The equivalence between the original multi-objective problem and the new single-objective one is proved. A simplex-based algorithm working with finite and infinitesimal numbers is proposed, implemented, and discussed. Results of some numerical experiments are provided.
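The key observation behind the infinitesimal weights can be sketched numerically. This is a toy illustration of the ordering idea only, not the paper's simplex implementation: a scalarized cost c₁·x + η·(c₂·x) + η²·(c₃·x), with η > 0 infinitesimal, ranks candidate solutions exactly as the lexicographic order on the objective vector, because every positive power of η is smaller than any positive finite quantity. Hence no finite weights need to be tuned.

```python
def lex_cost(x, C):
    """Scalarized cost of point x under objective rows C = [c1, c2, c3].

    The returned tuple holds the coefficients of c1.x + eta*(c2.x) + eta**2*(c3.x);
    Python's lexicographic tuple comparison realizes exactly the order induced
    by the infinitesimal weights (1, eta, eta**2).
    """
    return tuple(sum(ci * xi for ci, xi in zip(row, x)) for row in C)

# Toy LMOLP over a finite set of vertices (in the paper, a simplex method
# operating on finite/infinitesimal numbers performs this search instead):
C = [[1.0, 1.0],   # most important objective
     [0.0, 1.0],   # second objective, weight eta
     [1.0, 0.0]]   # third objective, weight eta**2
vertices = [(2.0, 1.0), (1.0, 2.0), (0.0, 3.0)]
best = min(vertices, key=lambda x: lex_cost(x, C))
```

All three vertices tie on the first objective (cost 3.0); the infinitesimally weighted second objective then selects (2.0, 1.0) without any finite weight ever being chosen.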
As real-time scenarios place increasingly strict timing constraints on the use of Deep Neural Networks (DNNs), there is a need to revisit how information is represented. A promising, if challenging, path is to employ an encoding that allows fast processing and a hardware-friendly representation of information. Among the proposed alternatives to the IEEE 754 standard for floating-point representation of real numbers, the recently introduced Posit format has been theoretically shown to be very promising in satisfying these requirements. However, in the absence of proper hardware support for this novel type, such an evaluation can be conducted only through software emulation. While waiting for the widespread availability of Posit Processing Units (the equivalent of the Floating-Point Unit (FPU)), we can already exploit the Posit representation and the currently available Arithmetic Logic Unit (ALU) to speed up DNNs by manipulating the low-level bit-string representations of Posits. As a first step, in this paper we present new arithmetic properties of the Posit number system, with a focus on the configuration with 0 exponent bits. In particular, we propose a new class of Posit operators, called L1 operators, consisting of fast, approximated versions of existing arithmetic operations or functions (e.g., the hyperbolic tangent (TANH) and the exponential linear unit (ELU)) that use only integer arithmetic. These operators offer very interesting properties and results: (i) faster evaluation than the exact counterparts with negligible accuracy degradation; (ii) an efficient ALU emulation of a number of Posit operations; and (iii) the possibility of vectorizing operations on Posits using existing ALU vector instructions (such as the Scalable Vector Extension of ARM CPUs or the Advanced Vector Extensions of Intel CPUs).
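The flavor of such bit-level operators can be illustrated with the widely cited fast sigmoid approximation for posits with 0 exponent bits (the sigmoid is the building block from which TANH-like operators are commonly derived via tanh(x) = 2·sigmoid(2x) − 1). The decoder below is a minimal sketch for an 8-bit, es = 0 posit, written for clarity rather than speed, and is an assumption of this note, not the paper's implementation; the operator itself touches only the integer bit string.

```python
def decode_posit8(bits):
    """Decode an 8-bit posit with es = 0 into a Python float (sketch)."""
    if bits == 0x00:
        return 0.0
    if bits == 0x80:
        return float("nan")  # NaR (not a real)
    sign = -1.0 if bits & 0x80 else 1.0
    if bits & 0x80:
        bits = (-bits) & 0xFF       # two's complement for negative posits
    rest = (bits << 1) & 0xFF       # drop the sign bit
    first = (rest >> 7) & 1
    run = 0
    while run < 7 and ((rest >> 7) & 1) == first:
        run += 1
        rest = (rest << 1) & 0xFF
    rest = (rest << 1) & 0xFF       # consume the regime terminator
    k = run - 1 if first else -run  # regime value; es = 0, so scale = 2**k
    nfrac = max(0, 7 - run - 1)     # remaining fraction bits
    frac = (rest >> (8 - nfrac)) / (1 << nfrac) if nfrac else 0.0
    return sign * (2.0 ** k) * (1.0 + frac)

def fast_sigmoid_bits(bits):
    """Sigmoid-like operator in pure integer arithmetic, no decode needed:
    flip the sign bit, then shift the whole posit bit string right by 2."""
    return ((bits ^ 0x80) & 0xFF) >> 2
```

For instance, feeding the bit pattern of 0.0 (0x00) yields exactly 0.5, and the pattern of 1.0 (0x40) yields 0.75 versus the exact sigmoid(1) ≈ 0.731: a cheap two-instruction approximation with modest error.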
As a second step, we test the proposed activation functions on Posit-based DNNs, showing that 16-bit down to 10-bit Posits are an exact replacement for 32-bit floats, while 8-bit Posits could be an interesting alternative: their accuracy is slightly lower, but their high speed and low storage requirements are very appealing (leading to lower bandwidth demand and more cache-friendly code). Finally, we point out that small Posits (i.e., up to 14 bits long) are very interesting until PPUs become widespread, since their operations can be tabulated in a very efficient way (see details in the text).
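The tabulation idea is easiest to see in the 8-bit case: a full binary-operation table has 2^16 one-byte entries (about 64 KiB), small enough to live in cache, after which every multiplication during inference becomes a single lookup. The decoder and the round-to-nearest encoder below are illustrative assumptions (exhaustive nearest-value rounding, arbitrary tie-breaking), not the paper's code or the posit standard's exact rounding rules.

```python
import bisect

def decode_posit8(bits):
    """8-bit, es = 0 posit decode (sketch); 0x80 is NaR."""
    if bits == 0x00:
        return 0.0
    if bits == 0x80:
        return float("nan")
    sign = -1.0 if bits & 0x80 else 1.0
    if bits & 0x80:
        bits = (-bits) & 0xFF
    rest = (bits << 1) & 0xFF
    first = (rest >> 7) & 1
    run = 0
    while run < 7 and ((rest >> 7) & 1) == first:
        run += 1
        rest = (rest << 1) & 0xFF
    rest = (rest << 1) & 0xFF
    k = run - 1 if first else -run
    nfrac = max(0, 7 - run - 1)
    frac = (rest >> (8 - nfrac)) / (1 << nfrac) if nfrac else 0.0
    return sign * (2.0 ** k) * (1.0 + frac)

# All 255 real-valued bit patterns, sorted by value, for nearest-value encoding.
_PAIRS = sorted((decode_posit8(b), b) for b in range(256) if b != 0x80)
_KEYS = [v for v, _ in _PAIRS]

def encode_posit8(x):
    """Round x to the nearest representable posit (ties resolved arbitrarily)."""
    i = bisect.bisect_left(_KEYS, x)
    best = min((j for j in (i - 1, i) if 0 <= j < len(_KEYS)),
               key=lambda j: abs(_KEYS[j] - x))
    return _PAIRS[best][1]

# One-time build of the 2**16-entry multiplication table (~64 KiB);
# NaR (0x80) propagates through any operation, as in posit arithmetic.
_DEC = [decode_posit8(b) for b in range(256)]
MUL = [bytes(0x80 if (a == 0x80 or b == 0x80)
             else encode_posit8(_DEC[a] * _DEC[b])
             for b in range(256)) for a in range(256)]
```

Usage is then `MUL[a_bits][b_bits]`, an integer index into a byte row; no floating-point unit is involved at inference time.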
This paper discusses the introduction of an integrated Posit Processing Unit (PPU) as an alternative to the Floating-Point Unit (FPU) for Deep Neural Networks (DNNs) in automotive applications. Autonomous driving tasks increasingly depend on DNNs. For example, the detection of obstacles by means of object classification needs to be performed in real time without involving remote computing. To speed up the inference phase of DNNs, the CPUs on board the vehicle should be equipped with co-processors, such as GPUs, which embed specific optimizations for DNN tasks. In this work, we review an alternative arithmetic that could be used within such a co-processor. We argue that a new floating-point representation called Posit is particularly advantageous, allowing a better trade-off between computation accuracy and implementation complexity. We conclude that implementing a PPU within the co-processor is a promising way to speed up the DNN inference phase.