2021
DOI: 10.48550/arxiv.2106.13814
Preprint

Training Saturation in Layerwise Quantum Approximate Optimisation

E. Campos,
D. Rabinovich,
V. Akshay
et al.

Abstract: Quantum Approximate Optimisation (QAOA) is the most studied gate-based variational quantum algorithm today. We train QAOA one layer at a time to maximize overlap with an n-qubit target state. In doing so, we discovered that such training always saturates, a phenomenon we call training saturation, at some depth p*, meaning that past this depth, overlap cannot be improved by adding subsequent layers. We formulate necessary conditions for saturation. Numerically, we find layerwise QAOA reaches its maximum overlap at depth p*…
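The layer-by-layer training described in the abstract can be sketched as a small toy simulation. This is only an illustrative sketch, not the paper's exact setup: the choice of a projector cost Hamiltonian H_C = -|t><t|, the product-of-X-rotations mixer, the grid search, and all function names are assumptions made for this example.

```python
import numpy as np

def layerwise_qaoa_overlaps(n, target_index, max_layers=5, grid=21):
    """Greedily train QAOA one layer at a time toward an n-qubit target
    basis state; return the best overlap found after each new layer.

    Toy model (an assumption, not the paper's exact setup): the cost
    unitary only phases the target amplitude, and the mixer is a product
    of single-qubit X rotations.
    """
    dim = 2 ** n
    X = np.array([[0, 1], [1, 0]], dtype=complex)

    def mixer(beta):
        # exp(-i*beta*X) on each qubit, assembled via Kronecker products.
        rx = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
        U = np.array([[1.0 + 0j]])
        for _ in range(n):
            U = np.kron(U, rx)
        return U

    def apply_cost(state, gamma):
        # With H_C = -|t><t|, the cost unitary phases only the target amplitude.
        out = state.copy()
        out[target_index] *= np.exp(1j * gamma)
        return out

    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # |+>^n start
    overlaps = []
    for _ in range(max_layers):
        best_overlap, best_params = -1.0, (0.0, 0.0)
        for gamma in np.linspace(0, 2 * np.pi, grid):
            phased = apply_cost(state, gamma)
            for beta in np.linspace(0, np.pi, grid):
                candidate = mixer(beta) @ phased
                overlap = abs(candidate[target_index]) ** 2
                if overlap > best_overlap:
                    best_overlap, best_params = overlap, (gamma, beta)
        # Freeze this layer's parameters before adding the next (layerwise training).
        state = mixer(best_params[1]) @ apply_cost(state, best_params[0])
        overlaps.append(best_overlap)
    return overlaps
```

Because the identity layer (gamma = beta = 0) is always in the search grid, the per-layer overlap sequence is non-decreasing; in this toy setting it eventually plateaus, which is the saturation behavior the paper studies.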



Cited by 3 publications (3 citation statements)
References 26 publications
“…The underlying layer-wise trainability conjecture has been proven not to hold in all cases [40]; i.e., the optimization can be predestined to saturate prematurely without finding the global optimum. However, very recent work shows that, for QAOA, this saturation occurs only as p approaches the number of qubits [41]. Thus, layer-wise optimization can outperform standard optimization for low-depth QAOA.…”
Section: Layer-wise Training
confidence: 99%
“…After collecting a set of samples, the bit string associated with the best solution to the combinatorial optimization problem, i.e., the bit string z that returns the minimum value of the associated cost function C(z), should be saved as the best approximate solution to the combinatorial optimization problem of interest. We note that FALQON has similarities to other quantum circuit parameter-setting protocols that involve "greedy", layer-by-layer optimization, e.g., where a classical optimization routine is used to sequentially optimize quantum circuit parameters in order to minimize a cost function in a layer-wise manner [64][65][66][67]. In fact, the parameter-setting rule given in Eq.…”
Section: Feedback-based Algorithm For Quantum Optimization
confidence: 99%
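The best-solution bookkeeping described in the passage above amounts to an argmin over sampled bit strings. A minimal sketch (the cost function here is an illustrative stand-in, not any paper's actual cost):

```python
def best_sample(samples, cost):
    """Return the measured bit string z minimizing the cost function C(z)."""
    return min(samples, key=cost)

# Example with a toy cost that simply counts 1-bits (illustrative assumption).
samples = ["101", "000", "111", "010"]
best = best_sample(samples, lambda z: z.count("1"))
```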
“…In parallel with exploring initialization strategies, another crucial topic is designing advanced training strategies for QAOA that avoid local optima and accelerate optimization. Concrete examples include modifying the objective function [35], applying iterative training strategies [36,37], and using adaptive mixing operators [34,38-40]. Despite these remarkable achievements, little progress has been made in overcoming the scalability issue of QAOA: even the most advanced QAOA demonstrations solve problems with only hundreds of vertices [41].…”
Section: Introduction
confidence: 99%