A key step in using gradient descent methods to develop learning algorithms for a regular feedforward fuzzy neural network (FNN) is differentiating max--min functions, which contain max and min operations. This paper pursues several objectives. First, it investigates further the differentiation of max--min functions. Second, it employs general fuzzy numbers, which include triangular and trapezoidal fuzzy numbers as special cases, to define a three-layer regular FNN. The general fuzzy numbers involved can be approximately determined by their corresponding finite level sets, so the input-output (I/O) relationship of the regular FNN can be approximately represented as functions of the endpoints of all finite level sets. Third, a fuzzy back-propagation algorithm is presented, and, to speed up convergence of the learning algorithm, a fuzzy conjugate gradient algorithm for the fuzzy weights and biases is developed; the convergence of this algorithm is analyzed systematically. Finally, simulations demonstrate the efficiency of the learning algorithms. The regular FNN is applied to the approximate realization of fuzzy inference rules and of fuzzy functions defined on given compact sets.
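To illustrate the kind of differentiation the abstract refers to, the following is a minimal sketch (not the paper's exact construction) of a scalar max--min composition y = max_j min(w_j, x_j) and a subgradient with respect to the weights w. At points where the max and min are attained uniquely, dy/dw_j is 1 exactly when index j attains the outer max and w_j is the smaller argument of the inner min; ties make the function non-differentiable, and here they are broken by taking the first attaining index, a common subgradient choice. The function names `maxmin` and `maxmin_grad_w` are illustrative, not from the paper.

```python
import numpy as np

def maxmin(w, x):
    """Max--min composition: y = max_j min(w_j, x_j)."""
    return np.max(np.minimum(w, x))

def maxmin_grad_w(w, x):
    """One subgradient of y w.r.t. w (ties broken at the first argmax)."""
    m = np.minimum(w, x)
    j = int(np.argmax(m))          # index attaining the outer max
    g = np.zeros_like(w, dtype=float)
    if w[j] <= x[j]:               # the inner min picked w_j, so dy/dw_j = 1
        g[j] = 1.0
    return g

w = np.array([0.2, 0.7, 0.5])
x = np.array([0.9, 0.4, 0.6])
# min(w, x) = [0.2, 0.4, 0.5]; the max 0.5 is attained at j = 2,
# where w[2] = 0.5 <= x[2] = 0.6, so only dy/dw_2 = 1.
print(maxmin(w, x))         # 0.5
print(maxmin_grad_w(w, x))  # [0. 0. 1.]
```

Such piecewise (sub)gradients are what a fuzzy back-propagation or conjugate gradient step would chain through at each layer of the regular FNN.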