This paper describes a learning multiple-valued logic (MVL) network that can explain its reasoning. The learning MVL network is derived directly from a canonical realization of MVL functions, so its functional completeness is guaranteed. We extend traditional back-propagation to MVL networks and derive a specific learning algorithm for them. The algorithm combines back-propagation learning with other features of MVL networks, including prior human knowledge about the network, for example its architecture, the number of hidden units and layers, and many other useful parameters. The prior knowledge embodied in the MVL canonical form can serve as the initial parameters of the learning MVL network. As a result, this prior knowledge guides the back-propagation learning process to start from a point in parameter space that is not far from the optimal one, so back-propagation can easily fine-tune the prior knowledge to achieve the desired output. Such a cooperative relation between prior knowledge and the back-propagation learning process is not always present in neural networks. The learning process in the MVL network also exhibits behaviors analogous to those of cells, in particular cell adhesion, cell apoptosis (the death of a cell), and cluster apoptosis (the death of a cluster of cells), and reproduces these properties successfully in the artificial MVL network. Simulation results confirm the effectiveness of the methods.
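The central idea, initializing a learnable model from prior knowledge and letting gradient-based learning fine-tune it, can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's actual MVL formulation: the network is reduced to a toy differentiable linear model, and the names `prior_params` and `fine_tune`, the quadratic loss, and all constants are illustrative choices.

```python
import numpy as np

def fine_tune(prior_params, X, y, lr=0.05, epochs=200):
    """Run gradient descent starting from prior-knowledge parameters."""
    w = np.asarray(prior_params, dtype=float).copy()  # prior knowledge as the starting point
    for _ in range(epochs):
        pred = X @ w
        grad = X.T @ (pred - y) / len(y)  # gradient of the mean squared error
        w -= lr * grad                    # back-propagation-style update
    return w

# Prior knowledge places the start "not far from the optimal" parameters,
# so a modest amount of fine-tuning closes the remaining gap.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
prior = true_w + 0.3                      # slightly perturbed prior knowledge
w = fine_tune(prior, X, y)
print(np.allclose(w, true_w, atol=1e-2))
```

Starting near the optimum is what makes the cooperation useful: the same number of gradient steps from a random initialization would generally leave a larger residual error.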