In particular, in-context learning, where few-shot examples are provided in the input prompt of a PLM, has been found to provide valuable guidance for generation output (Min et al., 2022; Brown et al., 2020; Min et al., 2021; Lu et al., 2021b). As a result, many recent efforts in prompting PLMs have sought to augment various natural language processing datasets (Chen et al., 2022; Sahu et al., 2022; Mehri et al., 2022; Rosenbaum et al., 2022a). Prompting has thus become a viable "solution" for augmentation in dialogue tasks, which have traditionally been considered challenging because dialogue context is difficult to augment (Chen et al., 2022).
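
As a concrete illustration of the few-shot setup described above, the sketch below assembles an in-context prompt from example dialogue turns, leaving the final system response for the PLM to complete. The example turns and the prompt template are illustrative assumptions, not drawn from any of the cited papers; a real augmentation pipeline would send the resulting string to a PLM.

```python
# Minimal sketch of few-shot prompting for dialogue augmentation.
# The dialogue pairs and template below are hypothetical placeholders.

FEW_SHOT_EXAMPLES = [
    ("Book me a table for two tonight.",
     "Sure, what time would you like the reservation?"),
    ("Is the museum open on Mondays?",
     "Yes, it is open from 9am to 5pm on Mondays."),
]

def build_prompt(examples, new_user_turn):
    """Concatenate few-shot dialogue pairs, then append the new user
    turn with an open 'System:' slot for the PLM to fill in."""
    parts = []
    for user, system in examples:
        parts.append(f"User: {user}\nSystem: {system}")
    parts.append(f"User: {new_user_turn}\nSystem:")
    return "\n\n".join(parts)

prompt = build_prompt(FEW_SHOT_EXAMPLES,
                      "Can you find me a cheap hotel downtown?")
print(prompt)
```

The in-context examples serve as the "valuable information" the excerpt refers to: they demonstrate the desired dialogue format and style so the model's completion can be used as a synthetic training example.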