Methods for making multi-layered neural networks (MNNs) fault-tolerant by intentionally injecting the snapping of a link, or noise into links, during the learning process have been studied in the literature. However, most of them consider only fault tolerance to the snapping of links. In this paper, we consider fault tolerance to weight faults, which include the snapping of links as a special case. We take a pattern recognition problem as a typical example. To make an MNN fault-tolerant to any single or double weight fault in a given interval or range, we intentionally inject the two extreme points of the single- or double-fault values in that interval or range into the MNN during learning. By simulation, we investigate how fault-tolerant the MNN becomes to weight faults, depending on the injected fault values. The degree of fault tolerance to an n-multiple weight fault is estimated by the number of essential multiple links. We obtain an interesting result: if only the two faults at the extreme points of the interval are injected, the number of essential links becomes zero for single faults of all weights in the interval. This means the MNN becomes fault-tolerant to any single weight fault in the interval. Expecting a similar result for double faults, we inject the two extreme points of a two-dimensional range. As expected, the number of 2-multiple essential links becomes zero in the range, meaning the MNN becomes fault-tolerant to any double weight fault in the range. Finally, we analyse the internal structure of the MNN through the distribution of the covariance between any two inputs of a neuron in the output layer.
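To make the injection scheme concrete, the following is a minimal sketch (not the authors' implementation) of learning with single-weight-fault injection at the two extreme fault values. The network architecture, the interval [FAULT_MIN, FAULT_MAX], the choice of PyTorch, and all hyperparameters are illustrative assumptions; the paper's simulations use a pattern recognition task in place of the random toy data here.

```python
import random
import torch
import torch.nn as nn

FAULT_MIN, FAULT_MAX = -1.0, 1.0   # assumed fault-value interval (the two extreme points)

model = nn.Sequential(nn.Linear(4, 8), nn.Sigmoid(), nn.Linear(8, 3))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def fault_injected_step(x, y):
    """One update averaging the loss over the two extreme fault values
    injected into a single randomly chosen weight (a single weight fault)."""
    weights = [p for p in model.parameters() if p.dim() == 2]
    w = random.choice(weights)
    idx = tuple(random.randrange(s) for s in w.shape)
    saved = w.data[idx].item()

    opt.zero_grad()
    for fault in (FAULT_MIN, FAULT_MAX):   # inject each extreme point in turn
        w.data[idx] = fault                # fault the selected weight
        loss = loss_fn(model(x), y) / 2    # average over the two faulty networks
        loss.backward()                    # gradients accumulate under the fault
        w.data[idx] = saved                # restore the fault-free weight
    opt.step()                             # update the fault-free network

# toy data standing in for the pattern recognition task
x = torch.randn(32, 4)
y = torch.randint(0, 3, (32,))
for _ in range(1000):
    fault_injected_step(x, y)
```

For double weight faults, the same idea would fault two randomly chosen weights simultaneously, injecting the extreme corner points of the two-dimensional fault range rather than the endpoints of a single interval.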