“…Word embedding is a real-valued vector representation of words that encodes both semantic and syntactic information learned from a large unlabeled corpus. It is a powerful tool widely used in modern natural language processing (NLP) tasks, including semantic analysis [1], information retrieval [2], dependency parsing [3], [4], [5], question answering [6], [7], and machine translation [6], [8], [9]. Learning a high-quality representation is extremely important for these tasks, yet the question "what is a good word embedding model" remains an open problem.…”