“…These methods aim to meta-learn good settings for various hyperparameters, such as the initialization parameters, so that new tasks can be learned quickly with optimization-based procedures. These procedures range from regular stochastic gradient descent, as used in MAML (Finn et al., 2017) and Reptile (Nichol et al., 2018), to meta-learned update rules in which a network updates the weights of a base-learner (Ravi et al., 2017; Andrychowicz et al., 2016; Li et al., 2017; Rusu et al., 2019; Li & Malik, 2018; Huisman et al., 2022). Like MAML, SAP aims to learn good initialization parameters such that new tasks can be learned quickly with regular gradient descent.…”
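The MAML-style idea referenced above — meta-learning an initialization from which a few gradient steps adapt well to new tasks — can be illustrated with a minimal first-order sketch. This is a toy example, not SAP or the paper's implementation: each hypothetical "task" is a 1-D quadratic loss L_t(w) = 0.5 (w - c_t)^2, the inner loop takes one gradient step, and the outer loop (first-order approximation) moves the initialization along the post-adaptation gradient.

```python
# Toy first-order MAML sketch (illustrative only; not the paper's SAP method).
# Each task t has loss L_t(w) = 0.5 * (w - c_t)^2, so its gradient is (w - c_t).
# The meta-learner finds an initialization theta from which a single inner
# gradient step adapts well to any task in the distribution.

def inner_step(w, c, alpha=0.4):
    """One gradient-descent step of task-specific adaptation."""
    return w - alpha * (w - c)

def maml_train(targets, theta=5.0, alpha=0.4, beta=0.1, iters=500):
    """First-order MAML outer loop: update the initialization theta using
    the task gradient evaluated at the adapted weights."""
    for i in range(iters):
        c = targets[i % len(targets)]          # cycle through tasks
        w_adapted = inner_step(theta, c, alpha)
        # First-order meta-gradient: grad of L_t at the adapted weights.
        theta = theta - beta * (w_adapted - c)
    return theta

# With tasks centered at -1 and +1, theta converges near 0: the point from
# which one inner step moves substantially toward either task optimum.
theta = maml_train(targets=[-1.0, 1.0])
```

After meta-training, one inner step from the learned initialization yields a lower task loss than the initialization itself — the "learn quickly with regular gradient descent" property the paragraph describes.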