“…As regards the text encoder, we carried out the experiments with three different multilingual models, i.e., the multilingual version of BERT (Devlin et al., 2019) (mBERT) and the base and large versions of XLM-RoBERTa (Conneau et al., 2020) (XLMR-base and XLMR-large, respectively). In the monolingual setting, we used the following language-specific models: BERT-de, CamemBERT-large (Martin et al., 2020), BERTit, and ParsBERT (Farahani et al., 2020), respectively, for German, French, Italian, and Farsi. As for all the other languages covered by the WordNet datasets, i.e., Bulgarian, Chinese, Croatian, Danish, Dutch, Estonian, Japanese, and Korean, we used the pre-trained models made available by TurkuNLP.…”
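For concreteness, the sketch below shows how such a per-language inventory of encoders could be loaded with the Hugging Face transformers library. The multilingual identifiers are the standard public checkpoints; the monolingual and TurkuNLP identifiers are plausible assumptions only, since the paper specifies its exact checkpoints in footnotes not reproduced in this excerpt.

```python
# Minimal sketch, assuming the Hugging Face `transformers` library.
# Multilingual checkpoints are the standard public ones; monolingual
# identifiers are HYPOTHETICAL guesses at the checkpoints the paper cites
# in its footnotes.
from transformers import AutoModel, AutoTokenizer

# Multilingual encoders (standard Hugging Face identifiers).
MULTILINGUAL = {
    "mBERT": "bert-base-multilingual-cased",
    "XLMR-base": "xlm-roberta-base",
    "XLMR-large": "xlm-roberta-large",
}

# Monolingual encoders; identifiers are assumptions, not taken from the paper.
MONOLINGUAL = {
    "de": "bert-base-german-cased",                    # BERT-de (assumed)
    "fr": "camembert/camembert-large",                 # CamemBERT-large (assumed)
    "it": "dbmdz/bert-base-italian-cased",             # BERTit (assumed)
    "fa": "HooshvareLab/bert-base-parsbert-uncased",   # ParsBERT (assumed)
}

def load_encoder(model_name: str):
    """Load a tokenizer/encoder pair for the given checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    return tokenizer, model

# Example: encode a sentence with the multilingual baseline and inspect
# the contextual token embeddings it produces.
tokenizer, model = load_encoder(MULTILINGUAL["mBERT"])
inputs = tokenizer("Ein Beispielsatz.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```

In the same spirit, the remaining languages (Bulgarian, Chinese, Croatian, etc.) would map to the corresponding TurkuNLP checkpoints; their exact names are likewise left to the paper's footnotes.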