“…There are two lines of related work: invasive methods and non-invasive methods. Invasive methods, which rest on the strong assumption that the inner structure (e.g., self-attention and feed-forward layers) of the PLM can be modified, include Prefix-Tuning (Li and Liang, 2021), BitFit (Ben Zaken et al., 2021), Child-Tuning, P-Tuning v2 (Liu et al., 2021b), LoRA (Hu et al., 2021), UnifiedSKG (Xie et al., 2022), and Adapter-based models (Rebuffi et al., 2017; Houlsby et al., 2019; Lin et al., 2020; He et al., 2021; Pfeiffer et al., 2021). Non-invasive methods, which only modify input embeddings and treat the inner structure as a black box, are mostly prompting methods (including our Input-Tuning).…”