Few-shot learners in NLP. Significant progress has been made in developing (Devlin et al., 2019; Peters et al., 2018; Brown et al., 2020), understanding (Liu et al., 2019; Tenney et al., 2019; Belinkov and Glass, 2019; Hewitt and Liang, 2019; Hewitt and Manning, 2019; Zhao et al., 2020a; Rogers et al., 2020), and utilizing (Houlsby et al., 2019; Zhao et al., 2020b; Brown et al., 2020; Li and Liang, 2021; Schick and Schütze, 2021a; Lester et al., 2021; Mi et al., 2021a) PLMs. Brown et al. (2020), Schick and Schütze (2021a), and Liu et al. (2021b) show that PLMs can serve as data-efficient few-shot learners through priming or prompting (Liu et al., 2021a).
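To make the priming idea concrete, the following is a minimal sketch of how a few-shot prompt is typically assembled: a handful of labeled demonstrations are concatenated with the unlabeled query, and the PLM's continuation is read off as the prediction, with no gradient updates. The task, template, and label words here are illustrative assumptions, not taken from any of the cited papers.

```python
def build_fewshot_prompt(demonstrations, query):
    """Concatenate labeled demonstrations with the unlabeled query.

    A real system would feed the resulting string to a PLM and read the
    predicted label from the model's continuation after "Sentiment:".
    """
    lines = []
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The query uses the same template but leaves the label slot empty.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

# Hypothetical sentiment-classification demonstrations.
demos = [
    ("A thoroughly enjoyable film.", "positive"),
    ("Dull and far too long.", "negative"),
]
prompt = build_fewshot_prompt(demos, "An instant classic.")
print(prompt)
```

Because the task is specified entirely through the prompt, the same frozen PLM can be repurposed for a new task by swapping the demonstrations, which is what makes this setup data-efficient.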