2021
DOI: 10.48550/arxiv.2110.14753
Preprint

Subtleties in the trainability of quantum machine learning models

Abstract: A new paradigm for data science has emerged, with quantum data, quantum models, and quantum computational devices. This field, called Quantum Machine Learning (QML), aims to achieve a speedup over traditional machine learning for data analysis. However, its success usually hinges on efficiently training the parameters in quantum neural networks, and the field of QML is still lacking theoretical scaling results for their trainability. Some trainability results have been proven for a closely related field called…

Cited by 16 publications (27 citation statements: 0 supporting, 27 mentioning, 0 contrasting) | References 71 publications
“…Similar to their classical counterparts, the ability of QML models to solve a given task hinges on several factors, with one of the most important being the choice of the model itself. If the inductive biases [14] of a model are uninformed, its expressibility is large, leading to issues such as barren plateaus in the training landscape [15][16][17][18][19][20]. Adding sharp priors to the model narrows the effective search space and increases its performance [21][22][23].…”
Section: Introduction (mentioning)
confidence: 99%
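The barren plateaus this statement refers to can be seen numerically: for a sufficiently expressive random circuit, the variance of the cost gradient decays exponentially with qubit count. Below is a minimal NumPy sketch of that effect, not code from the paper or the citing works; the ansatz (layered RY rotations with CZ entanglers, depth equal to width), the observable (Z on the first qubit), and the sample counts are all illustrative choices.

```python
# Minimal sketch: gradient variance of a random layered circuit vs. qubit count.
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit RY rotation matrix (real-valued)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q (big-endian) of an n-qubit statevector."""
    state = state.reshape(2**q, 2, 2**(n - q - 1))
    return np.einsum("ab,ibj->iaj", gate, state).reshape(-1)

def apply_cz_chain(state, n):
    """Apply CZ between each neighbouring pair (q, q+1)."""
    state = state.copy()
    idx = np.arange(2**n)
    for q in range(n - 1):
        both_one = ((idx >> (n - 1 - q)) & 1) & ((idx >> (n - 2 - q)) & 1)
        state[both_one == 1] *= -1.0   # CZ flips the sign of |...1...1...>
    return state

def cost(thetas, n):
    """<Z on qubit 0> after L layers of RY rotations + CZ entanglers."""
    state = np.zeros(2**n)
    state[0] = 1.0
    for layer in thetas:               # thetas has shape (L, n)
        for q in range(n):
            state = apply_1q(state, ry(layer[q]), q, n)
        state = apply_cz_chain(state, n)
    signs = 1 - 2 * ((np.arange(2**n) >> (n - 1)) & 1)
    return np.sum(signs * state**2)    # state is real, so amp^2 = |amp|^2

def grad_first_param(thetas, n):
    """Exact parameter-shift gradient of the cost w.r.t. the first angle."""
    plus, minus = thetas.copy(), thetas.copy()
    plus[0, 0] += np.pi / 2
    minus[0, 0] -= np.pi / 2
    return 0.5 * (cost(plus, n) - cost(minus, n))

# Estimate Var[dC/dtheta_0] over random parameter draws, for growing width.
for n in range(2, 9):
    L = n                              # depth grows with width
    grads = [grad_first_param(rng.uniform(0, 2 * np.pi, (L, n)), n)
             for _ in range(200)]
    print(f"n = {n}: Var[dC/dtheta_0] ~ {np.var(grads):.2e}")
```

On a typical run the printed variance should decay roughly exponentially as n grows, which is the barren-plateau signature the quoted passage attributes to uninformed, highly expressive models.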
“…) for some α > 1 [61], this means that ε = O(α^(−MN/2)), so that the estimations are not mere random guesses [41]. Finally, when exponential encoding is used, we get N_gt < O(3^(MN(1 − log₃ α)/2)), telling us that a feasible quantum advantage entails α < 3.…”
Section: QSL Models as FFLMs and Their Possible Quantum Advantage (mentioning)
confidence: 95%
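To see why the quoted bound singles out α < 3: the gate-count bound 3^(MN(1 − log₃ α)/2) grows with MN only while the exponent stays positive, i.e. while α < 3, whereas the required precision ε shrinks exponentially in MN for any α > 1. A small arithmetic sketch, using the symbols from the quoted statement with hypothetical values of M, N, and α (this is not released code from the citing paper):

```python
import math

def eps_bound(alpha, M, N):
    # Required estimation precision, ε = O(α^(−MN/2)); constant factor dropped.
    return alpha ** (-M * N / 2)

def gate_bound(alpha, M, N):
    # Gate-count bound under exponential encoding,
    # N_gt < O(3^(MN(1 − log₃ α)/2)); constant factor dropped.
    return 3 ** (M * N * (1 - math.log(alpha, 3)) / 2)

M, N = 4, 4  # hypothetical model sizes
for alpha in (1.5, 2.0, 2.9, 3.0, 3.5):
    print(f"alpha = {alpha}: eps ~ {eps_bound(alpha, M, N):.2e}, "
          f"N_gt bound ~ {gate_bound(alpha, M, N):.2e}")
```

At α = 3 the exponent in the gate-count bound vanishes, and beyond it the bound collapses below 1, consistent with the statement that a feasible quantum advantage entails α < 3.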
“…Similar to standard noisy, large-scale variational circuits, the variational machine learning approach becomes more difficult to train as the dimension of the Fock space increases, likely due to the issue of vanishing cost-function gradients [60][61][62][63][64], requiring exponentially growing precision to optimize the circuit parameters in situ [65]. In addition, it is expensive to train the quantum gates (in this case the tunable beam-splitter meshes) in the noisy intermediate-scale quantum (NISQ) era, as it is time-consuming to […] The classification boundaries for all datasets become more complicated as the number of input photons increases, illustrating the increasing expressive power.…”
Section: Linear Quantum Photonic Circuits as Gaussian Kernel Samplers (mentioning)
confidence: 99%
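The "exponentially growing precision" point can be made concrete with a standard shot-noise estimate. A back-of-envelope sketch, assuming gradients that shrink as 2^(−n) with system size n and the usual 1/√S statistical error of S measurement shots; the numbers are illustrative, not taken from the cited work:

```python
# Shots needed to resolve a gradient of magnitude g: statistical error
# after S shots is ~ 1/sqrt(S), so resolving g requires S ~ 1/g^2.
for n in range(2, 12, 2):          # hypothetical system sizes
    g = 2.0 ** (-n)                # assumed gradient scale on a flat landscape
    shots = 1.0 / g**2             # S ~ 1/g^2 = 4^n: exponential sampling cost
    print(f"n = {n:2d}: gradient ~ {g:.1e}, shots needed ~ {shots:.1e}")
```

Under these assumptions the sampling cost grows as 4^n, which is why in-situ optimization on a flat landscape quickly becomes infeasible.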