For example, in unsupervised learning, word embeddings often contain biases (Bolukbasi et al., 2016; Caliskan et al., 2017; Garg et al., 2018) which persist even after attempts to remove them (Gonen and Goldberg, 2019). There are many examples of bias in supervised learning contexts: YouTube's captioning models make more errors when transcribing women (Tatman, 2017), African-American English (AAE) is more likely to be misclassified as non-English by widely used language classifiers (Blodgett and O'Connor, 2017), numerous gender and racial biases exist in sentiment classification systems (Kiritchenko and Mohammad, 2018), and errors in both co-reference resolution systems and occupational classification models reflect gendered occupational patterns (Zhao et al., 2018; De-Arteaga et al., 2019).