The growing synergy between Web Services and Grid-based technologies [7] will potentially enable profound, dynamic interactions between scientific applications dispersed in geographic, institutional, and conceptual space. Such deep interoperability requires the simplicity, robustness, and extensibility for which SOAP [4,3] was conceived, making it a natural lingua franca. Concomitant with these advantages, however, is a degree of inefficiency that may limit the applicability of SOAP in some situations. In this paper, we investigate the limitations of SOAP for high-performance scientific computing. We analyze the processing of SOAP messages and identify the bottlenecks at each stage. Based on the results of our investigation, we present a high-performance SOAP implementation and a schema-specific parser. Once our SOAP optimizations are in place, the most significant remaining bottleneck is ASCII/double conversion. Rather than handling this with extensions to SOAP, we recommend a multiprotocol approach that uses SOAP to negotiate faster binary protocols between messaging participants.
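The ASCII/double bottleneck is easy to reproduce outside of SOAP itself. The following Python sketch is not from the paper; it is a minimal illustration comparing a text round-trip of an array of doubles with a raw IEEE 754 binary round-trip, the kind of encoding a negotiated binary protocol could use:

```python
import struct
import timeit

values = [3.141592653589793 + i for i in range(10_000)]

# SOAP-style: doubles serialized as ASCII text, then parsed back.
def ascii_roundtrip():
    text = [repr(v) for v in values]   # double -> ASCII
    return [float(s) for s in text]    # ASCII -> double

# Binary alternative: raw IEEE 754 doubles, no lexical conversion.
def binary_roundtrip():
    blob = struct.pack(f"{len(values)}d", *values)   # doubles -> bytes
    return list(struct.unpack(f"{len(values)}d", blob))

print("ASCII :", timeit.timeit(ascii_roundtrip, number=100))
print("binary:", timeit.timeit(binary_roundtrip, number=100))
```

On typical hardware the ASCII round-trip is several times slower, which is consistent with the paper's observation that lexical conversion, not transport, dominates once other SOAP overheads are removed.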
The training problem for feedforward neural networks is nonlinear parameter estimation that can be solved by a variety of optimization techniques. Much of the literature on neural networks has focused on variants of gradient descent. The training of neural networks using such techniques is known to be a slow process, with more sophisticated techniques not always performing significantly better. In this paper, we show that feedforward neural networks can have ill-conditioned Hessians and that this ill-conditioning can be quite common. The analysis and experimental results in this paper lead to the conclusion that many network training problems are ill-conditioned and may not be solved more efficiently by higher-order optimization methods. While our analyses are for completely connected layered networks, they extend to networks with sparse connectivity as well. Our results suggest that neural networks can have considerable redundancy in parameterizing the function space in a neighborhood of a local minimum, independently of whether or not the solution has a small residual.

1. Introduction. Some neural network techniques are, in a strictly mathematical sense, an approach to function approximation. As with most approximation methods, they require the estimation of certain (possibly nonunique) parameters which are defined by the problem to be solved [14]. In neural network terminology, finding those parameters is called the training problem, and algorithms for finding them are called training algorithms. This nomenclature comes from analogy with biological systems, since a set of inputs to the function to be approximated are presented to the network, and the parameters are adjusted to make the output of the network close in some sense to the known value of the function.

Feedforward neural networks use a specific parameterized functional form to approximate a desired input/output relation. Typically, a system is sampled, resulting in a finite set of pairs $(t, \tau) \in \mathbb{R}^p \times \mathbb{R}$, where the first coordinate is a position in $p$-dimensional space and the second coordinate is the value assigned to that point. The feedforward neural network function, also a map from $\mathbb{R}^p$ to $\mathbb{R}$, has a set of parameters, called weights, which must be determined so that the input and output values given by the sample data are matched as closely as possible by the approximating network. The neural network function for the $i$th input pattern ($i = 1, 2, \ldots, m$) can be written succinctly in the form
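A standard single-hidden-layer form of this function, stated here as an assumption since the paper's own equation is not reproduced in this excerpt, is

$$
f(t_i; w) \;=\; \sum_{j=1}^{n} v_j \,\sigma\!\left( \sum_{k=1}^{p} w_{jk}\, t_{ik} + b_j \right),
\qquad \sigma(z) = \frac{1}{1 + e^{-z}},
$$

where $n$ is the number of hidden units and the weight vector $w$ collects the output weights $v_j$, input weights $w_{jk}$, and biases $b_j$. Training then chooses $w$ to minimize the residual $\sum_{i=1}^{m} \bigl( f(t_i; w) - \tau_i \bigr)^2$, the nonlinear least-squares problem whose Hessian conditioning the paper analyzes.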
Three conjugate gradient accelerated row projection (RP) methods for nonsymmetric linear systems are presented and their properties described. One method is based on Kaczmarz's method and has an iteration matrix that is the product of orthogonal projectors; another is based on Cimmino's method and has an iteration matrix that is the sum of orthogonal projectors. Also introduced is a new RP method that requires fewer matrix-vector operations, explicitly reduces the problem size, is error reducing in the 2-norm, and consistently produces better solutions than other RP algorithms. Using comparisons with the method of conjugate gradient applied to the normal equations, the properties of RP methods are explained. A row partitioning approach is described which yields parallel implementations suitable for a wide range of computer architectures, requires only a few vectors of extra storage, and allows computing the necessary projections with small errors. Numerical testing verifies the robustness of this approach and shows that the resulting algorithms are competitive with other nonsymmetric solvers in speed and efficiency.
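For concreteness, here is a minimal Python sketch of the classical, unaccelerated Kaczmarz sweep underlying the first method; the variable names are illustrative, and the paper's CG-accelerated and row-partitioned variants are not shown:

```python
import numpy as np

def kaczmarz_sweep(A, b, x):
    """One full sweep of Kaczmarz's method: successively project x
    onto the hyperplane defined by each row, a_i^T x = b_i."""
    for i in range(A.shape[0]):
        a_i = A[i]
        x = x + (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Illustrative use on a small nonsymmetric system.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 5 * np.eye(50)
x_true = rng.standard_normal(50)
b = A @ x_true

x = np.zeros(50)
for _ in range(200):
    x = kaczmarz_sweep(A, b, x)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Each step is an orthogonal projection, which is why composing a sweep gives an iteration matrix that is a product of orthogonal projectors; the Cimmino variant instead averages all the projections, giving a sum of projectors.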
In 1980, Han [6] described a finitely terminating algorithm for solving a system Ax ≤ b of linear inequalities in a least squares sense. The algorithm uses a singular value decomposition of a submatrix of A on each iteration, making it impractical for all but the smallest problems. This paper shows that a modification of Han's algorithm allows the iterates to be computed using QR factorization with column pivoting, which significantly reduces the computational cost and allows efficient updating/downdating techniques to be used. The effectiveness of this modification is demonstrated, implementation details are given, and the behaviour of the algorithm is discussed. Theoretical and numerical results are shown from the application of the algorithm to linear separability problems.
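The least squares sense here means minimizing the norm of the constraint violations, $\min_x \| (Ax - b)_+ \|_2$. The Python sketch below illustrates only the active-set structure of this problem; it is a hypothetical simplification, not Han's algorithm or the paper's QR-updating refinement, and its helper name and tolerances are invented for illustration:

```python
import numpy as np

def lsq_inequalities(A, b, max_iter=100):
    """Approximately minimize ||max(Ax - b, 0)||_2 by repeatedly
    fitting the currently violated rows in a least-squares sense."""
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = A @ x - b
        active = r > 1e-12          # currently violated inequalities
        if not active.any():        # all constraints satisfied: done
            break
        # Least-squares step restricted to the active rows. (Han computes
        # this via an SVD of the active submatrix; the modified algorithm
        # uses QR factorization with column pivoting instead.)
        x, *_ = np.linalg.lstsq(A[active], b[active], rcond=None)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5))
b = A @ rng.standard_normal(5) + rng.uniform(0.0, 1.0, 30)  # feasible system
x = lsq_inequalities(A, b)
print("max violation:", np.max(A @ x - b))
```

The cost driver is the repeated factorization of the active submatrix as the active set changes, which is exactly where replacing the SVD with a pivoted QR and updating/downdating it row by row pays off.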