Most existing state-of-the-art methods for Zero-Shot Learning (ZSL) suffer from the semantic-insufficiency and domain-shift problems and therefore fail to achieve satisfactory results. To alleviate these problems, we propose a novel generative ZSL method that learns more generalized features from multiple knowledge sources in semantic-to-visual embedding. In our approach, the proposed Multi-Knowledge Fusion Network (MKFNet) alleviates the semantic-insufficiency problem by fusing the domain information of different knowledge sources, which enables more relevant semantic features to be exploited for semantic-to-visual feature embedding. The proposed knowledge regularization loss L_KR greatly enlarges the overlap between the visual features synthesized by MKFNet and the real unseen visual features, which alleviates the domain-shift problem. Empirically, we show that our approach consistently outperforms state-of-the-art methods on a wide range of benchmarks in the generalized ZSL (GZSL) setting.
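The pipeline described above can be sketched in heavily simplified form. Everything here is an illustrative assumption, not the paper's actual architecture: linear maps stand in for MKFNet's fusion and generation networks, and a mean-matching penalty stands in for the role L_KR plays in pulling synthesized features toward the unseen visual distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_knowledge(attr, word, W_attr, W_word):
    """Fuse two semantic knowledge sources (e.g. attribute vectors and
    word embeddings) by projecting each into a shared space and
    averaging -- a simplified stand-in for multi-knowledge fusion."""
    return 0.5 * (attr @ W_attr + word @ W_word)

def generate_visual(z, semantic, W_gen):
    """Map noise plus the fused semantic code to a synthetic visual
    feature (a single linear layer stands in for the generator)."""
    return np.concatenate([z, semantic]) @ W_gen

def knowledge_reg(fake_feats, real_unseen_feats):
    """Illustrative regularizer: squared distance between the means of
    synthesized and real unseen features, encouraging the two
    distributions to overlap (the role the text assigns to L_KR)."""
    return float(np.sum((fake_feats.mean(0) - real_unseen_feats.mean(0)) ** 2))

# Toy dimensions: attributes, word embeddings, shared semantic space,
# noise, and visual features.
d_attr, d_word, d_sem, d_z, d_vis = 85, 300, 64, 16, 128
W_attr = rng.normal(size=(d_attr, d_sem)) * 0.01
W_word = rng.normal(size=(d_word, d_sem)) * 0.01
W_gen = rng.normal(size=(d_z + d_sem, d_vis)) * 0.01

# Synthesize a batch of visual features for one (unseen) class.
sem = fuse_knowledge(rng.normal(size=d_attr), rng.normal(size=d_word),
                     W_attr, W_word)
fake = np.stack([generate_visual(rng.normal(size=d_z), sem, W_gen)
                 for _ in range(32)])
real_unseen = rng.normal(size=(32, d_vis))
print(fake.shape, round(knowledge_reg(fake, real_unseen), 4))
```

In a real generative GZSL pipeline this regularizer would be one term in the generator's training loss; a classifier is then trained on the mixture of real seen features and synthesized unseen features.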