“…Our second clustering algorithm, affinity propagation, has the advantage of determining the number of clusters automatically: it partitions the data into exemplars and instances, each exemplar being a representative token for its instances, the non-exemplar tokens assigned to the same cluster. As Pivovarova et al. (2019) point out, 'Affinity Propagation has been previously used for several NLP tasks, including collocation clustering into semantically related classes (Kutuzov et al., 2017) and unsupervised word sense induction (Alagić et al., 2018)'. Given that, just as in the above-cited article, we lacked a gold standard, we used the standard hyperparameters as available in the scikit-learn package (Pedregosa et al., 2011).…”
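The setup described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the synthetic blob data stands in for the real token embeddings, and `AffinityPropagation` is used with scikit-learn's default hyperparameters (damping of 0.5, preference set to the median similarity), so the number of clusters is chosen by the algorithm itself.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

# Hypothetical stand-in for token embeddings: 60 points, 16 dimensions.
X, _ = make_blobs(n_samples=60, centers=4, n_features=16, random_state=0)

# Default hyperparameters: damping=0.5, preference=None (median similarity).
# No number of clusters is specified; affinity propagation infers it.
ap = AffinityPropagation(random_state=0).fit(X)

# Exemplars are the rows chosen as cluster representatives;
# labels_ maps every point (exemplar or instance) to its cluster.
exemplars = ap.cluster_centers_indices_
labels = ap.labels_
n_clusters = len(exemplars)
print(f"found {n_clusters} clusters")
```

Each exemplar belongs to its own cluster (`labels[exemplars[k]] == k`), which mirrors the exemplar/instance split described in the text.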