2021
DOI: 10.1088/1367-2630/abf798

How to enhance quantum generative adversarial learning of noisy information

Abstract: Quantum machine learning is where machine learning (ML) now meets quantum information science. In order to implement this new paradigm for novel quantum technologies, we still need a much deeper understanding of its underlying mechanisms before proposing new algorithms to feasibly address real problems. In this context, quantum generative adversarial learning is a promising strategy to use quantum devices for quantum estimation or generative ML tasks. However, the convergence behaviours of its training p…


Cited by 11 publications (6 citation statements)
References 47 publications
“…4). This approach to building the circuit is new, since the papers that use quantum discriminators employ what are called ansatz circuits (Braccia et al 2021), in other words generic circuits built from layers of rotation gates and controlled rotation gates (see 3.6 and 3.7 below for the definitions of these gates). Such ansatz circuits are therefore parameterised circuits as put forward in Chakrabarti et al (2019), where generally no interpretation of the circuit's architecture as a classifying neural network can be made.…”
Section: Quantum Discriminator
confidence: 99%
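The passage above describes ansatz circuits as generic parameterised circuits built from layers of rotation gates and controlled rotation gates. A minimal NumPy sketch of such a layered ansatz follows; the specific gate choices (RY rotations, controlled-RZ entanglers on nearest neighbours) and the reuse of the layer angles for the entanglers are illustrative assumptions, not the architecture of any cited paper:

```python
import numpy as np

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of a state tensor of shape (2,)*n."""
    state = np.tensordot(gate, state, axes=([1], [q]))
    return np.moveaxis(state, 0, q)

def apply_2q(state, gate, q1, q2):
    """Apply a 4x4 gate to qubits (q1, q2) of a state tensor."""
    g = gate.reshape(2, 2, 2, 2)  # (out1, out2, in1, in2)
    state = np.tensordot(g, state, axes=([2, 3], [q1, q2]))
    return np.moveaxis(state, [0, 1], [q1, q2])

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def crz(theta):
    """Controlled-RZ: identity if the control is |0>, RZ(theta) if |1>."""
    g = np.eye(4, dtype=complex)
    g[2, 2] = np.exp(-1j * theta / 2)
    g[3, 3] = np.exp(1j * theta / 2)
    return g

def ansatz_state(params, n):
    """Layered ansatz on |0...0>: each layer applies RY(theta) to every
    qubit, then controlled-RZ entanglers between nearest neighbours
    (entangler angles reuse the layer's rotation angles, for brevity).
    params has shape (n_layers, n)."""
    state = np.zeros((2,) * n, dtype=complex)
    state[(0,) * n] = 1.0
    for layer in params:
        for q, theta in enumerate(layer):
            state = apply_1q(state, ry(theta), q)
        for q in range(n - 1):
            state = apply_2q(state, crz(layer[q]), q, q + 1)
    return state.reshape(-1)

rng = np.random.default_rng(0)
psi = ansatz_state(rng.uniform(0, 2 * np.pi, size=(2, 3)), n=3)
print(np.round(np.linalg.norm(psi), 6))  # unitary layers keep the state normalised: 1.0
```

Because every layer is unitary, deepening the circuit changes expressivity but never the norm; this is the sense in which generator and discriminator built from the same layered template are automatically matched in capacity.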
“…Such ansatz circuits are therefore parameterised circuits as put forward in Chakrabarti et al (2019), where generally no interpretation of the circuit's architecture as a classifying neural network can be made. As pointed out in Braccia et al (2021), the architectures of both the generator and the discriminator are the same, which on the one hand removes the need to monitor whether there is an imbalance in expressivity between the generator and the discriminator; on the other hand, it prevents us from giving a straightforward interpretation of the given architectures. The main task here is then to translate these classical computations into a quantum input for the discriminator.…”
Section: Quantum Discriminator
confidence: 99%
“…This corresponds to a quantum circuit Born machine (QCBM) generator [56]. In this case the qGAN was applied to sampling from discrete distributions [57–63], preparing arbitrary quantum states [64, 65], and was demonstrated experimentally with superconducting circuits for image generation [66]. In this setting the sampling procedure is efficient, but its power depends on the register width and the generator is difficult to train at increasing scale.…”
Section: Introduction
confidence: 99%
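The Born-machine generator described above produces samples by measuring the circuit's output state, so a bitstring x is drawn with probability |⟨x|ψ⟩|². A small NumPy sketch of this sampling step, with a fixed normalised random vector standing in for the circuit output (an assumption purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a 3-qubit circuit output: a normalised complex vector.
amps = rng.normal(size=8) + 1j * rng.normal(size=8)
amps /= np.linalg.norm(amps)

# Born rule: computational-basis measurement yields outcome x
# with probability |<x|psi>|^2.
probs = np.abs(amps) ** 2
samples = rng.choice(8, size=10_000, p=probs)

# Empirical frequencies converge to the Born probabilities.
freq = np.bincount(samples, minlength=8) / samples.size
print(np.max(np.abs(freq - probs)))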