Abstract. Recently, several novel strategies have been proposed for training Single Hidden Layer Feedforward Networks (SLFNs): the weights from the input to the hidden layer are set randomly, while the weights from the hidden to the output layer are determined analytically via the Moore-Penrose generalised inverse. Such non-iterative strategies are appealing because they allow fast learning, but some care is required to achieve good results, mainly concerning the procedure used for matrix pseudoinversion. This paper proposes a novel approach based on an original determination of the initialization interval for the input weights, a careful choice of hidden-layer activation functions, and a critical use of the generalised inverse to determine the output weights. We show that this key step suffers from numerical problems related to matrix invertibility, and we propose a heuristic procedure that brings more robustness to the method. We report results on a difficult astronomical image analysis problem, chromaticity diagnosis, to illustrate the various points under study.
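The training scheme summarized above can be sketched as follows. This is a minimal, generic illustration of the random-input-weight / pseudoinverse approach, not the specific procedure proposed in this paper: the function names, the uniform initialization interval, and the tanh activation are illustrative assumptions.

```python
import numpy as np

def train_slfn(X, T, n_hidden=50, init_scale=1.0, rng=None):
    """Non-iterative training of a single-hidden-layer feedforward network.

    Input-to-hidden weights are drawn at random; hidden-to-output
    weights are then obtained analytically via the Moore-Penrose
    pseudoinverse of the hidden-layer activation matrix.
    """
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    # Random input weights and biases, drawn from [-init_scale, init_scale]
    # (the choice of this interval is one of the issues the paper studies)
    W = rng.uniform(-init_scale, init_scale, size=(n_features, n_hidden))
    b = rng.uniform(-init_scale, init_scale, size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activations, shape (n_samples, n_hidden)
    # Output weights: minimum-norm least-squares solution. np.linalg.pinv
    # uses an SVD-based pseudoinverse, which can be ill-conditioned when H
    # is close to rank-deficient -- the numerical issue discussed above.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

For example, fitting a one-dimensional regression target amounts to a single matrix pseudoinversion, which is what makes this family of methods fast compared with iterative gradient-based training.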