In this paper we apply an immune-inspired approach to train Multilayer Perceptrons (MLPs) for classification problems. Our proposal, called Gaussian Artificial Immune System (GAIS), is an estimation of distribution algorithm that replaces the traditional mutation and cloning operators with a probabilistic model, more specifically a Gaussian network, representing the joint distribution of promising solutions. GAIS then samples new solutions from this probabilistic model. The algorithm thus takes into account the relationships among the variables of the problem, avoiding the disruption of high-quality partial solutions (building blocks) already obtained. Besides the capability to identify and manipulate building blocks, the algorithm maintains diversity in the population, performs multimodal optimization and adjusts the population size automatically to the problem. These attributes, generally absent from alternative algorithms, all proved useful when optimizing the weights of MLPs, thus leading to high-performance classifiers. GAIS was evaluated on six well-known classification problems, and its performance compares favorably with that of contenders such as opt-aiNet, IDEA and PSO.
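The model-then-sample loop described above can be sketched as follows. This is a minimal illustration, not the paper's method: it fits a plain multivariate Gaussian (rather than a learned Gaussian network) to the best individuals and resamples the population from it, and a toy quadratic fitness stands in for the MLP training error. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def gaussian_eda(fitness, dim, pop_size=60, elite_frac=0.3,
                 generations=100, seed=0):
    """Toy Gaussian EDA: estimate a distribution over elite solutions,
    then sample the next population from it (no mutation/cloning)."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 1.0, size=(pop_size, dim))
    n_elite = max(2, int(elite_frac * pop_size))
    for _ in range(generations):
        scores = np.apply_along_axis(fitness, 1, pop)
        # Select the most promising solutions (lower fitness is better).
        elite = pop[np.argsort(scores)[:n_elite]]
        # Estimate the joint distribution of the elite: a full covariance
        # captures pairwise dependencies among problem variables.
        mean = elite.mean(axis=0)
        cov = np.cov(elite, rowvar=False) + 1e-6 * np.eye(dim)
        # Sample a fresh population from the fitted model.
        pop = rng.multivariate_normal(mean, cov, size=pop_size)
    scores = np.apply_along_axis(fitness, 1, pop)
    return pop[np.argmin(scores)]

# Stand-in objective: squared norm of a hypothetical weight vector.
best = gaussian_eda(lambda w: float(np.sum(w ** 2)), dim=5)
```

Because the covariance matrix models dependencies among variables jointly, sampled offspring preserve co-adapted groups of weights instead of perturbing each weight independently.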