“…Word Embedding tools, technologies, and pre-trained models are widely available for resource-rich languages such as English (Mikolov et al., 2013; Pennington et al., 2014) and Chinese (Li et al., 2018; Chen et al., 2015). Due to the wide use of Word Embeddings, pre-trained models are increasingly available for resource-poor languages such as Portuguese (Hartmann et al., 2017), Arabic (Elrazzaz et al., 2017; Soliman et al., 2017), and Bengali (Ahmad and Amin, 2016).…”