2021
DOI: 10.48550/arXiv.2110.05517
Preprint

Learnability of the output distributions of local quantum circuits

Abstract (excerpt): […] in the SQ model is often taken as strong evidence for hardness in the sample model. In summary, we study in this work the following problems, which are stated more formally in Section 3: Problems: PAC probabilistic modelling of quantum circuit Born machines (informal). Let C be the set of output distributions corresponding to a class of local quantum circuits. Given either sample-oracle or SQ-oracle access to some unknown distribution P ∈ C, output, with high probability, either generative modelling: an efficien…
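To make the two oracle models named in the abstract concrete, here is a minimal sketch (an illustration, not taken from the paper) of how an SQ oracle for an unknown output distribution P can be emulated from samples: given a bounded query function phi and a tolerance tau, it returns an estimate of E_{x~P}[phi(x)] to within roughly tau. The sampler, the query function, and the sample-count bound are illustrative assumptions.

```python
# Minimal sketch (assumption, not from the paper): emulating a statistical-query
# (SQ) oracle for an unknown output distribution P using only samples from P.
import numpy as np

def sq_oracle(sampler, phi, tau):
    """Estimate E_{x~P}[phi(x)] to additive error ~tau with high probability.

    sampler(m) -> array of m bit-strings drawn from the unknown distribution P
    phi        -> bounded query function mapping a bit-string to [-1, 1]
    tau        -> query tolerance; O(1/tau^2) samples suffice (Hoeffding bound)
    """
    m = int(np.ceil(4.0 / tau ** 2))
    xs = sampler(m)
    return float(np.mean([phi(x) for x in xs]))

# Toy usage: P is the uniform distribution on 3-bit strings (e.g. the output of
# a trivial circuit); the query phi(x) = (-1)^{x_0} has true expectation 0.
rng = np.random.default_rng(0)
uniform_sampler = lambda m: rng.integers(0, 2, size=(m, 3))
print(sq_oracle(uniform_sampler, lambda x: (-1) ** int(x[0]), tau=0.05))
```

The sample model, by contrast, hands the learner the raw bit-strings themselves, which is strictly more information than such tolerance-limited expectation estimates.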

Cited by 9 publications (11 citation statements) | References 18 publications

Citation statements
“…This is analogous to the classical setting where parity functions are hard to learn in the noisy SQ setting, but efficient to learn using simple linear regression [45]. Similarly, the related work of [41] showed that output distributions of Clifford circuits can be hard to learn using statistical queries, but efficient using a technique that resorts to linear regression on a matrix formed from samples of the overall distribution. More loosely, our results provide support to the basic maxim that algorithms which apply too broadly will work very rarely [80].…”
Section: Hardness of SQ Learning Variational Function Classes (mentioning)
confidence: 84%
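As a concrete illustration of the classical fact quoted in this statement, the following hedged sketch recovers a hidden parity function f_s(x) = ⟨s, x⟩ mod 2 from noiseless labelled samples by Gaussian elimination over GF(2) (the "simple linear regression" route), even though the same class is hard for statistical-query learners. The variable names and the sample count 4n are illustrative choices, not taken from the cited works.

```python
# Hedged illustration: a parity f_s(x) = <s, x> mod 2 is easy to recover from
# labelled samples by solving a linear system over GF(2), i.e. "linear
# regression mod 2", although it is hard in the (noisy) SQ model.
import numpy as np

def solve_gf2(A, b):
    """Gaussian elimination over GF(2); returns one solution s of A s = b (mod 2)."""
    A, b = A.copy() % 2, b.copy() % 2
    n, row, pivots = A.shape[1], 0, []
    for col in range(n):
        hits = np.nonzero(A[row:, col])[0]
        if hits.size == 0:
            continue                      # no pivot in this column
        p = row + hits[0]
        A[[row, p]] = A[[p, row]]         # move pivot row into place
        b[[row, p]] = b[[p, row]]
        for r in range(A.shape[0]):       # eliminate the column everywhere else
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
    s = np.zeros(n, dtype=int)
    s[pivots] = b[:len(pivots)]           # free variables set to 0
    return s

rng = np.random.default_rng(1)
n = 8
s_true = rng.integers(0, 2, n)            # hidden parity string
X = rng.integers(0, 2, (4 * n, n))        # random noiseless samples
y = X @ s_true % 2                        # labels f_s(x)
print(np.array_equal(solve_gf2(X, y), s_true))  # True with high probability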
“…Recent results have shown that certain fundamental and rather simple classes of quantum "functions" are hard to learn in the SQ setting. Namely, (classical) output distributions of locally constructed quantum states [41] and the set of Clifford circuits [37] are hard to learn given properly chosen statistical query oracles. Following these results, we show that simple classes of functions generated by variational circuits are also exponentially difficult to learn in the SQ settings we consider.…”
Section: Quantum Statistical Query Models (mentioning)
confidence: 99%
“…Our proposal bridges the gap between two important research efforts. The first is the known rigorous and theoretical standpoint, which usually lacks immediate real-world application (see e.g., [34,35]). Conversely, the other uses state-of-the-art real-world datasets, where only heuristics and approximate metrics can be proposed, but where the complexity of the models and tasks at hand blur any definite conclusions about the model generalization capacity.…”
Section: Introduction (mentioning)
confidence: 99%
“…We further study the classical learnability of the outcome distribution by introducing a tractability measure based on the classical parent Hamiltonian of the output distribution as a coherent Gibbs state. In contrast to other recent studies [19][20][21], which are mostly based on theoretical tools such as probably approximately correct (PAC) learning framework that is of limited relevance to neural network simulations [22,23], our measure is closely related to classical energy-based models. Combining insight from the tractability measure and from training an energy-based model, we argue that classical learnability undergoes a transition but earlier than the KL divergence (see Fig.…”
(mentioning)
confidence: 98%
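To illustrate the kind of classical energy-based modelling and KL-divergence comparison mentioned in this last statement, the sketch below fits a fully visible pairwise energy-based model Q_theta(x) ∝ exp(-E_theta(x)) to a small target distribution P by exact gradient descent on D(P || Q_theta). The target P, the pairwise energy, and n = 4 are assumptions made for illustration; this is not the tractability measure or training setup of the cited work.

```python
# Toy sketch (illustrative assumptions only): fit a classical pairwise
# energy-based model to a target distribution P over n-bit strings and
# monitor the KL divergence D(P || Q_theta) during training.
import itertools
import numpy as np

n = 4
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

rng = np.random.default_rng(0)
p = rng.random(2 ** n)
p /= p.sum()                            # arbitrary target "output distribution" P

def model_probs(W, b):
    """Exact probabilities Q(x) ∝ exp(x·b + x^T W x) by enumeration (small n)."""
    logits = states @ b + np.einsum('si,ij,sj->s', states, W, states)
    logits -= logits.max()              # numerical stability
    q = np.exp(logits)
    return q / q.sum()

def kl(p, q):
    return float(np.sum(p * (np.log(p) - np.log(q + 1e-12))))

W, b, lr = np.zeros((n, n)), np.zeros(n), 0.1
for step in range(500):
    q = model_probs(W, b)
    # Exact gradient of D(P||Q) in the natural parameters:
    # data statistics minus model statistics.
    b += lr * (states.T @ p - states.T @ q)
    W += lr * (states.T @ (p[:, None] * states) - states.T @ (q[:, None] * states))
    if step % 100 == 0:
        print(step, kl(p, q))
print('final KL:', kl(p, model_probs(W, b)))
```

Because the model only carries first- and second-order interactions, the KL divergence decreases but need not reach zero; tracking where such a fit succeeds or fails is the spirit of the tractability comparison described in the quoted statement.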