2018
DOI: 10.1103/physreva.97.052333
Automatic spin-chain learning to explore the quantum speed limit

Abstract: One of the ambitious goals of artificial intelligence is to build a machine that outperforms human intelligence even when only limited knowledge and data are provided. Reinforcement learning (RL) offers one possible route toward this goal. In this work, we consider a specific task from quantum physics: quantum state transfer in a one-dimensional spin chain. The machine's mission is to find the fastest transfer schemes while maintaining high transfer fidelity. The first scenario we consider…
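The task the abstract describes has a well-known analytic benchmark: in the single-excitation subspace of an XY spin chain, the engineered couplings J_n = λ√(n(N−n)) of Christandl et al. give perfect end-to-end transfer at t = π/(2λ). A minimal NumPy sketch of this benchmark (an illustration of the state-transfer problem, not the paper's RL scheme) is:

```python
import numpy as np

def transfer_fidelity(N=8, lam=1.0, t=None):
    """Fidelity of transfer from site 1 to site N in the single-excitation
    subspace of an XY chain with engineered couplings
    J_n = lam * sqrt(n (N - n))  (Christandl et al.)."""
    if t is None:
        t = np.pi / (2 * lam)  # known perfect-transfer time
    n = np.arange(1, N)
    J = lam * np.sqrt(n * (N - n))
    H = np.diag(J, 1) + np.diag(J, -1)     # tridiagonal hopping matrix
    E, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T  # U = exp(-i H t)
    return abs(U[N - 1, 0])                # |<N| U |1>|

print(transfer_fidelity())  # ~1.0 at the perfect-transfer time
```

Finding faster protocols than this analytic one, subject to a fidelity constraint, is exactly the kind of search the paper hands to the RL agent.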

Cited by 73 publications (41 citation statements)
References 107 publications (145 reference statements)
“…The long-term goal of the agent is to maximise the cumulative expected return, thus improving its performance in the longer run. Shadowed by more traditional optimal control algorithms, Reinforcement Learning has only recently taken off in physics (Albarran-Arriagada et al , 2018; August and Hernández-Lobato, 2018; Bukov, 2018; Bukov et al , 2018; Cárdenas-López et al , 2017; Chen et al , 2014; Chen and Xue, 2019; Dunjko et al , 2017; Fösel et al , 2018; Lamata, 2017; Melnikov et al , 2017; Neukart et al , 2017; Niu et al , 2018; Ramezanpour, 2017; Reddy et al , 2016b; Sriarunothai et al , 2017; Zhang et al , 2018). Of particular interest are biophysics inspired works that seek to use RL to understand navigation and sensing in turbulent environments (Colabrese et al , 2017; Masson et al , 2009; Reddy et al , 2016a; Vergassola et al , 2007).…”
Section: Discussion (mentioning)
confidence: 99%
“…In this paper, we adopt a radically different approach to this problem based on machine learning (ML) [40–46]. ML has recently been applied successfully to several problems in equilibrium condensed matter physics [47,48], turbulent dynamics [49,50] and experimental design [51,52], and here we demonstrate that Reinforcement Learning (RL) provides deep insights into nonequilibrium quantum dynamics [53–58]. Specifically, we use a modified version of the Watkins Q-Learning algorithm [40] to teach a computer agent to find driving protocols which prepare a quantum system in a target state |ψ*⟩ starting from an initial state |ψ_i⟩ by controlling a time-dependent field.…”
Section: Introduction (mentioning)
confidence: 99%
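The quoted passage — Watkins Q-learning searching for bang-bang driving protocols that steer |ψ_i⟩ to |ψ*⟩ — can be sketched in tabular form for a single qubit. The Hamiltonian, field values, step count, and hyperparameters below are illustrative assumptions for a toy problem, not the settings of the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bang-bang control problem: qubit H(h) = (sigma_x + h * sigma_z) / 2,
# field h in {0, 4}; choosing h = 0 at every step is a perfect pi-pulse.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
fields = [0.0, 4.0]
T = 8                                        # time steps per episode
dt = np.pi / T
psi0 = np.array([1, 0], dtype=complex)       # initial state |0>
target = np.array([0, 1], dtype=complex)     # target state |1>

def step_unitary(h):
    H = (sx + h * sz) / 2
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * dt)) @ V.conj().T

Us = [step_unitary(h) for h in fields]

def episode_fidelity(actions):
    psi = psi0
    for a in actions:
        psi = Us[a] @ psi
    return abs(target.conj() @ psi) ** 2

# Tabular Q-learning over states (time step, current field index).
Q = np.zeros((T, len(fields), len(fields)))
alpha, eps, best = 0.1, 0.2, 0.0
for ep in range(3000):
    f, actions = 0, []
    for t in range(T):                       # epsilon-greedy rollout
        a = rng.integers(len(fields)) if rng.random() < eps \
            else int(np.argmax(Q[t, f]))
        actions.append(a)
        f = a
    r = episode_fidelity(actions)
    best = max(best, r)
    f = 0                                    # one-step Q updates; the
    for t in range(T):                       # reward arrives only at the end
        a = actions[t]
        nxt = 0.0 if t == T - 1 else np.max(Q[t + 1, a])
        rew = r if t == T - 1 else 0.0
        Q[t, f, a] += alpha * (rew + nxt - Q[t, f, a])
        f = a

print(f"best fidelity found: {best:.3f}")
```

The reward here is the final-state fidelity, so the agent only learns from the end of each episode; richer state representations and reward shaping (as in the works cited above) speed this up considerably.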
“…Except for these traditional routes, recently the deep reinforcement learning (RL) [37] shows a wide applicability in quantum control problems [38–51]. For example, how to drive a qubit from a fixed initial state to another fixed target state with discrete pulses by leveraging the deep RL [52] has been investigated [36].…”
Section: Introduction (mentioning)
confidence: 99%