This work analyzes and applies a Stochastic One-Step (SOS) method, a noniterative and instantaneous method for training feed-forward artificial neural networks. The Stochastic One-Step method makes use of stochastic weights (random values) during the training stage. An interesting finding is that, when the proposed method is applied to multi-input, multi-output systems using an amputated matrix, each output variable obtains its optimal topology; such an effect is not found in traditional training methods. For the development of the methodology, simulated data (reconstruction of three surfaces) and data from real situations are used. In addition, the feasibility of the method is tested through rigorous comparisons against traditional methods, in which the proposed method was up to 50 % superior under statistical criteria. In some cases, the proposed method was up to 100 times faster than traditional training methods, with prediction quality comparable to the preferred methods. Some works in the literature state that a single-hidden-layer feedforward neural network (SLFN) does not have the same abstraction capacity as deep neural networks; therefore, the MNIST dataset, a database commonly used in machine learning, is used to show that an SLFN can offer acceptable results when trained with the SOS method. The results for MNIST reached 98.15 % accuracy with a training time of 1.27 hours, sweeping from 1 to 9,000 hidden neurons. To improve the proposed method, different factors that strongly influence its performance are analyzed, such as the range in which the parameters (stochastic weights) are initialized, which has a great impact on the final performance. Since the SOS method trains neural networks quickly, more detailed studies became possible. One of these studies was the implementation of preprocessing techniques such as principal component analysis, which substantially reduces the dimensionality of large databases while also identifying the optimal number of principal components. Finally, a methodology for fast and effective training is proposed, in which the user does not need to define any tuning parameters. The proposed methodology aims to establish fixed rules on how to train an artificial neural network.
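To make the description of one-step training concrete, the following is a minimal NumPy sketch of the general idea the abstract outlines: hidden-layer weights are drawn at random within a user-chosen range (the initialization range the abstract identifies as a strong influence on performance), and the linear output layer is then obtained in a single noniterative least-squares step. The sigmoid activation, the uniform initialization, and all function names here are illustrative assumptions, not details taken from the source.

```python
import numpy as np

def train_one_step(X, Y, n_hidden=100, weight_range=1.0, seed=None):
    """Sketch of one-step training with stochastic hidden weights.

    X : (n_samples, n_inputs) inputs
    Y : (n_samples, n_outputs) targets
    weight_range : the stochastic weights are drawn from [-weight_range, weight_range];
                   the abstract notes this range strongly affects final performance.
    """
    rng = np.random.default_rng(seed)
    # Stochastic (random) hidden-layer parameters -- fixed, never updated iteratively.
    W = rng.uniform(-weight_range, weight_range, size=(X.shape[1], n_hidden))
    b = rng.uniform(-weight_range, weight_range, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # hidden activations (sigmoid assumed)
    # Single noniterative step: output weights from an ordinary least-squares solve.
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W, b, beta

def predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage on a simulated surface z = sin(x) * cos(y), analogous in spirit to the
# surface-reconstruction experiments mentioned in the abstract.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 2))
Y = (np.sin(X[:, 0]) * np.cos(X[:, 1]))[:, None]
W, b, beta = train_one_step(X, Y, n_hidden=200, weight_range=1.0, seed=0)
print(np.mean((predict(X, W, b, beta) - Y) ** 2))   # training mean squared error
```

Under the same assumptions, the principal-component-analysis preprocessing mentioned in the abstract would correspond to projecting X onto its leading principal components before calling train_one_step.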