Linguistics and artificial intelligence (AI) intersect in many areas, and multimodal language matching is one of them. Multi-modal robots can process multiple sensory modalities, including vision, audition, language, and touch, which offers broad prospects for applications across many domains. Despite significant advances in perception and interaction, vision-language matching remains challenging for multi-modal robots. Existing methods often struggle to match complex multi-modal data accurately, leading to misinterpretation or incomplete understanding of the input. In addition, the heterogeneity among sensory modalities further complicates the matching process. To address these challenges, we propose vision-language matching with semantically aligned embeddings (VLMS), an approach aimed at improving the vision-language matching performance of multi-modal robots.
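As a general illustration of the idea behind semantically aligned embeddings (not the specific VLMS architecture, which is detailed in later sections), image and text features can be projected into a shared space and matched by cosine similarity. The sketch below is a minimal example under that assumption; the feature dimensions, projection matrices, and pre-extracted features are hypothetical placeholders.

```python
# Minimal sketch of vision-language matching in a shared embedding space.
# The features and projection matrices below are hypothetical stand-ins,
# not the actual VLMS components.
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1, eps=1e-8):
    """Map vectors to the unit sphere so cosine similarity reduces to a dot product."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

# Assume pre-extracted features (e.g., from separate image and text encoders).
image_feats = rng.normal(size=(4, 512))   # 4 images, 512-d visual features
text_feats = rng.normal(size=(4, 300))    # 4 captions, 300-d textual features

# Linear projections into a shared 256-d space (random stand-ins for learned weights).
W_img = rng.normal(size=(512, 256)) * 0.02
W_txt = rng.normal(size=(300, 256)) * 0.02

img_emb = l2_normalize(image_feats @ W_img)
txt_emb = l2_normalize(text_feats @ W_txt)

# Cosine-similarity matrix: entry (i, j) scores image i against caption j.
sim = img_emb @ txt_emb.T

# Retrieve the best-matching caption for each image.
best_caption = sim.argmax(axis=1)
print(sim.round(3))
print("best caption per image:", best_caption)
```

In a trained system, the projections would be optimized (for example, with a contrastive objective) so that matching image-text pairs receive high similarity scores and mismatched pairs receive low ones; here they are random and serve only to show the matching mechanics.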