“…This means that not every word occurrence is considered individually (token-based); instead, a single general vector representation that summarizes all occurrences of a word (including polysemous words) is created. The results of SemEval-2020 Task 1 and DIACR-Ita (Basile et al., 2020) demonstrated that, overall, type-based approaches (Asgari et al., 2020; Kaiser et al., 2020; Pražák et al., 2020) achieved better results than token-based approaches (Beck, 2020; Kutuzov and Giulianelli, 2020; Laicher et al., 2020). This is surprising for two main reasons: (i) contextualized token-based approaches have significantly outperformed static type-based approaches in several NLP tasks over the past years (Ethayarajh, 2019).…”
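The distinction drawn above can be sketched in a few lines: a token-based approach keeps one contextualized vector per occurrence, while a type-based approach collapses them into a single summary vector, commonly by averaging. The embeddings below are dummy placeholder values standing in for contextualized model outputs, and the averaging step is one common choice of summary, not the specific method of any cited system.

```python
import numpy as np

# Hypothetical contextualized vectors for three occurrences of the
# polysemous word "bank" (values are illustrative placeholders, not
# real model outputs).
occurrence_vectors = [
    np.array([0.9, 0.1, 0.0]),  # "the river bank"
    np.array([0.1, 0.9, 0.0]),  # "the bank approved the loan"
    np.array([0.2, 0.8, 0.1]),  # "deposited money at the bank"
]

# Token-based view: each occurrence retains its own vector,
# so different senses stay separate.
token_based = occurrence_vectors

# Type-based view: one vector summarizing every occurrence of the
# word, here via a simple element-wise average.
type_vector = np.mean(occurrence_vectors, axis=0)

print(type_vector)  # single summary vector, senses blended together
```

The averaged `type_vector` mixes the "river" and "finance" senses into one point in the space, which is exactly the information loss for polysemous words that the token-based approaches try to avoid.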