“…In particular, variational QML algorithms may reduce the number of required trainable parameters [238], the generalization error [226,227,228,229], and the number of examples required to learn a model [236,222], and may improve training landscapes [277,226,227,231,232]. Evidence supporting one or more of these advantages has been found in both theoretical models and proof-of-principle implementations of quantum neural networks (QNNs) [226,227,228,231] and quantum kernel methods (QKMs) [222,229,225]. Notably, both of these methods are closely related to VQAs that leverage gradient-based classical optimizers [223,411,412]; indeed, their definitions often overlap in the literature, as briefly noted in 2019 [213].…”
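To make the notion of a VQA trained with a gradient-based classical optimizer concrete, the following is a minimal illustrative sketch (not drawn from the cited works; all names are hypothetical). It simulates a one-qubit variational circuit RY(θ)|0⟩ in plain NumPy, whose cost ⟨Z⟩ = cos θ is minimized by gradient descent, with gradients obtained via the parameter-shift rule commonly used for such circuits.

```python
import numpy as np

def expectation(theta):
    # Cost function of the toy VQA:
    # <0| RY(theta)^dag Z RY(theta) |0> = cos(theta)
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ z @ state)

def parameter_shift_grad(theta, shift=np.pi / 2):
    # Parameter-shift rule: for gates generated by a Pauli operator,
    # (f(theta + s) - f(theta - s)) / 2 with s = pi/2 gives the
    # exact derivative, evaluated only through circuit executions.
    return (expectation(theta + shift) - expectation(theta - shift)) / 2.0

# Classical gradient-descent loop driving the quantum cost downward.
theta, lr = 0.1, 0.4
for _ in range(100):
    theta -= lr * parameter_shift_grad(theta)

# theta converges to pi, where <Z> attains its minimum of -1.
```

The hybrid structure shown here (quantum evaluation of the cost, classical update of the parameters) is the shared skeleton that makes QNNs and variational QML algorithms close relatives, as the excerpt notes.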